Dynamic Geometry in Classrooms
http://www.math.txstate.edu/dynamicgeometry/
Texas State University

Presenters:
  • Brittany Webre, Doctoral Research/Instructional Assistant
    http://www.math.txstate.edu/people/grad-students/webre.html
  • Zhonghong Jiang, Professor
  • Shawnda Smith
  • M. Alejandra Sorto, Associate Professor
  • Alexander White, Associate Professor
    http://www.math.txstate.edu/people/faculty/white.html
Public Discussion

Continue the discussion of this presentation on the Multiplex.

  • Jackie DeLisi

    Facilitator
    Research Scientist
    May 11, 2015 | 11:25 a.m.

    Interesting project. Did students in the control classrooms have other types of hands-on and engaging opportunities to explore the same mathematical content?

  • Brittany Webre

    Lead Presenter
    Doctoral Research/Instructional Assistant
    May 11, 2015 | 01:03 p.m.

    Yes, the control teachers engaged their students in exploring the same mathematical content using patty paper, compasses and rulers, protractors, Geoboards, Smartboard flash tiles, games, and/or student journals.

  • Ben Sayler

    Facilitator
    Professor of Physical Science and Mathematics
    May 11, 2015 | 03:36 p.m.

    Interesting indeed!

    I was struck by what I think I understood to be referred to as "lecture through activity-based methodology" as a description of the control group (CG) pedagogy. Did I get that right? If so, could you elaborate?

    I’m also interested more generally in what differences were observed between the CG and the dynamic geometry (DG) classrooms in terms of classroom instruction.

    I’m also curious whether you calculated a Cohen’s effect size. I understand that the effects were statistically significant, but I’m not familiar enough with HLM to know whether they were large, small, or somewhere in between at each of the levels.

    Thanks!!

  • Sharon Strickland

    Guest
    May 12, 2015 | 10:05 a.m.

    Whoops! I can’t edit now, but I accidentally addressed part of your question in a new post below rather than as a reply.

  • Ben Sayler

    Facilitator
    Professor of Physical Science and Mathematics
    May 13, 2015 | 10:09 a.m.

    Excellent – thanks for describing the business-as-usual instruction below.

  • Randy Kochevar

    Facilitator
    Senior Research Scientist
    May 11, 2015 | 06:56 p.m.

    This seems like a really robust study – well done! I’d love to hear your thoughts on what key principles you would take from this project to apply to other computer-assisted learning programs.

  • Sharon Strickland

    Guest
    May 12, 2015 | 03:53 p.m.

    I’ll let other team members share their “take-aways” (and I might add more later myself). I tend to think a lot about structural features of the implementation. First, we had district-level and school-level buy-in from administration, which helped: principals often worked to schedule the labs for the teachers and requested that the software be loaded onto machines. I suspect it also helped alleviate some fears teachers might have had about trying new things and worrying that supervisors might not agree with the DG approach. I also think the PD mattered, specifically that the teachers used actual lessons they could later teach during their technical software training, and that there were many lesson-plan supports right up front as well as spaced throughout the year.

  • Sharon Strickland

    Guest
    May 12, 2015 | 09:45 a.m.

    Hi Ben, I worked on the project and will share a tad until one of our official presenters can address your questions. I have some of the statistical analysis on my hard drive, but I may not have the most up-to-date results, so I will see if someone else chimes in before I share them. As for the CG “lecture with activity,” it generally refers to what we called a “business as usual” approach, based on the teaching practices already employed in geometry classrooms in the districts. Those teachers tended to use class time for lecturing/leading discussions and for hands-on activities, such as with patty paper, MIRA, compass, etc. These activities were not strictly inquiry-oriented but were more of the form: present new material, use a hands-on activity to reinforce and further understand/explore that material, and verify the theorem/result through the activity. Given that there were 30+ CG teachers and many school days, there was of course variation, but I think this description captures the typical approach.

  • Alexander White

    Co-Presenter
    Associate Professor
    May 12, 2015 | 10:37 a.m.

    Ben:
    I am involved on the statistical side of the project. We did compute effect sizes for each level of the classes. In the school districts, geometry classes were split into different levels depending on students’ standardized test scores and parent wishes. The three levels we included in our model were Pre-Advanced Placement (or Honors), Regular, and Middle School Pre-Advanced Placement. The Pre-AP students were typically, but not exclusively, 9th graders taking geometry; the Regular classes were mostly 10th graders; and the middle school students were mostly 8th graders. Not surprisingly, the differences in achievement between these groups were very large (both pre and post): Middle School outperformed Pre-AP, which outperformed Regular.

    So we computed effect sizes within each group and found them to be: Regular = 0.55, Pre-AP = 0.27, and 8th grade = 0.19. We found these results quite exciting, since the DG group outperformed the control group at each of the three levels (though the effect was small at the middle school level), and the greatest effect was in the Regular group. This is important because Regular was the largest group of students and, one could argue, the group that needs the most help. Furthermore, an item-by-item analysis of the post-test results showed the DG effect to be quite robust, meaning that the DG students outperformed their control counterparts (controlling for pre-test, teaching experience, and level of the course) on nearly every item of the test.
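
    For readers who want the mechanics, here is a minimal sketch of a within-group effect-size computation: a plain two-sample Cohen’s d with a pooled standard deviation. The function and the scores below are illustrative only; the project’s actual effect sizes came out of the HLM analysis, so treat this as the idea rather than our code.

    ```python
    import numpy as np

    def cohens_d(treatment, control):
        """Cohen's d: mean difference divided by the pooled standard deviation."""
        t = np.asarray(treatment, dtype=float)
        c = np.asarray(control, dtype=float)
        nt, nc = len(t), len(c)
        # Pool the two sample variances, weighting each by its degrees of freedom.
        pooled_sd = np.sqrt(((nt - 1) * t.var(ddof=1) + (nc - 1) * c.var(ddof=1))
                            / (nt + nc - 2))
        return (t.mean() - c.mean()) / pooled_sd

    # Illustrative (made-up) post-test scores for one course level.
    dg_scores = [72, 68, 80, 75, 71, 69, 77, 74]
    cg_scores = [65, 70, 66, 62, 71, 64, 68, 63]
    print(f"Cohen's d: {cohens_d(dg_scores, cg_scores):.2f}")
    ```

    Read d = 0.55 as saying that the average DG student in the Regular level scored roughly half a pooled standard deviation above the average control student.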

  • Ben Sayler

    Facilitator
    Professor of Physical Science and Mathematics
    May 13, 2015 | 10:10 a.m.

    Super. Very helpful response and compelling results!

  • Karen Trujillo

    Math Snacks Outreach Director
    May 12, 2015 | 12:53 p.m.

    This is very exciting. I have a few questions about the study. First, how many classrooms were in the control and experimental groups? I am sure it is somewhere, but I didn’t see it in the video. Second, did you find that the teachers in the DG group had facility with the technology? I know we have found various levels of comfort among the teachers we work with. If so, how did they adapt to the use of technology?

  • Sharon Strickland

    Guest
    May 12, 2015 | 03:27 p.m.

    Hi. Again, although I’m not an official presenter, I am a senior researcher on the project, specializing in the qualitative data, and can answer your question. There were 33 DG and 31 CG teachers after attrition.

    As for adapting to the technology, the qualitative data analysis (I’m thinking primarily of teachers’ monthly self-reports) is ongoing. My early impression is that DG teachers did not resist the technology per se, but at times ran into barriers such as lab access, software going down unexpectedly (the district rebooting systems, etc.), and time constraints on fully using DG explorations in an already full calendar of lightning-fast “coverage”. As for what I think your question also considers, teachers’ personal comfort with the technology itself: most were fine. Some adapted amazingly quickly and took on leadership roles during PD. Some took longer to become comfortable, and these teachers tended to say they benefited from the PD sessions spaced throughout the school year that targeted upcoming content.

  • Brittany Webre

    Lead Presenter
    Doctoral Research/Instructional Assistant
    May 12, 2015 | 03:50 p.m.

    I would like to add to Dr. Strickland’s comment that most of the schools in the participating districts had enough computer-lab equipment for the DG group teachers to teach with the DG software installed on those computers (the grant money bought enough DG software licenses for each school). We also asked the district administrators for help in making sure that the schools provided sufficient support to the teachers, and in fact most schools did. In a few schools, teachers experienced various difficulties getting enough time for students to do the DG investigations in the computer labs.

    As for teachers’ comfort levels with the DG software from a quantitative angle: based on a total of 31 responses from teachers who completed the DG Implementation Questionnaire, 29% of the teachers were at a high level of effectiveness, 61% were at a middle level, and 10% were at a low level. In addition, teachers seemed to feel more comfortable than effective in using GSP in their teaching. Only one teacher did not feel comfortable using GSP to teach geometry; an overwhelming majority (97%) felt very comfortable or somewhat comfortable using GSP in teaching. Twenty-two teachers felt as effective as they felt comfortable in using GSP to teach geometry.
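
    As a quick consistency check, those rounded percentages correspond to whole-teacher counts out of 31. The counts below are inferred from the percentages, not taken directly from the questionnaire data:

    ```python
    # Counts inferred from the rounded percentages (31 respondents total).
    counts = {"high": 9, "middle": 19, "low": 3}  # effectiveness levels
    comfortable = 31 - 1                          # all but one teacher

    for level, n in counts.items():
        print(f"{level:>11}: {n}/31 = {n / 31:.0%}")                  # 29%, 61%, 10%
    print(f"comfortable: {comfortable}/31 = {comfortable / 31:.0%}")  # 97%
    ```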

    If you are interested in more details, please let us know.

    DG Team

  • Sharon Strickland

    Guest
    May 12, 2015 | 04:02 p.m.

    Hi Brittany! Yes, you are correct—I did not mean to imply that the teachers were not supported. But a few did run into some troubles: most self-reports indicate some hiccups with installation in that first month, but after that they were good to go. I was actually present for an observation one day (in a lab) when the district’s servers went down for maintenance; I felt really bad for the teacher. Such events were not common, but notable things like that may appear in self-reports more often precisely because they stand out. On the other side, some principals actively took control of scheduling to make sure their DG teachers had lab-time priority.

  • Icon for: Wendy Smith

    Wendy Smith

    Assistant Director, Center for Science, Mathematics & Computer Education
    May 13, 2015 | 12:30 p.m.

    I was very interested to see this study and the positive impact of using dynamic geometry software on student achievement in geometry. I was concerned, however, that a key finding highlighted here is that students in the DG condition were faster at proving. It’s pretty well established that, while fluency is important, focusing on speed does not foster the type of creative thinking and productive mindset that we want to cultivate in students.

  • Sharon Strickland

    Guest
    May 13, 2015 | 12:47 p.m.

    Hello Wendy,
    Yes, I think we would agree that speed is not the overarching goal we have for students, and that deeper thinking and problem solving take time and should be encouraged over speed.

    The video describes that, in interviews with the teachers in the project, DG teachers were faster than CG teachers at developing proofs. As for students, we do not have data saying anything similar with regard to speed. For student results, we found that the DG students outperformed CG students on a posttest based on the state exam students would be expected to pass at the end of a high school geometry course; none of our data measured the speed at which students completed the posttest (administered under the same time constraints for both groups). Also, because the posttest was based on the state exam, there were no proofs on it. But we do also have student data from a Conjecturing and Proving Test (CPT) that specifically examined those skills (the posttest was broader, consisting of typical knowledge and application items). Maybe we can get Alex to describe findings from the CPT. Thanks for bringing that issue up.

  • Alexander White

    Co-Presenter
    Associate Professor
    May 13, 2015 | 01:04 p.m.

    Wendy:
    Just to piggyback on what Sharon was saying: the issue of speed had to do with semi-structured tasks in which teachers (and, separately, students) were given new geometry problems to explore. The goal was for the participant to explore, conjecture, and then prove their conjecture. In the control group the participants were given ruler, compass, and protractor, while in the DG group they were allowed to use GSP. The interviews, which took place at school during the school day, had a time limit due to the constraints of the school schedule. In this context, the DG participants were able to explore more cases more quickly and then move on to making a conjecture. With the added time and the help of the software, they were able to test their conjecture and, in most cases, prove the result. The control participants, who were restricted to classical tools, also tried to create examples and explore the problem, but the tools slowed them down; in some cases they were unable to construct an accurate figure, which impeded their progress, and many did not have time to reach the level of proof. Beyond the limited time, the sense of progress being made with GSP also gave the DG participants motivation and momentum.

  • Further posting is closed as the showcase has ended.