Hofstra Horizons Research

How Children and Adults Think About Data

Amy Masnick, Ph.D.
Assistant Professor, Department of Psychology


If the time on the wall clock is different from the time on your watch, how do you determine the “correct” time? If you roll a ball down a ramp and measure the distance it travels, will it travel the exact same distance if rolled a second time? If you see two golfers driving a golf ball many times, how can you determine which golfer is better? Children and adults regularly encounter data in a range of contexts. Although elementary school children are formally introduced to the concept of collecting and analyzing data in the science classroom, they encounter these concepts in informal contexts on a regular basis. Adults also need to reason about data in the context of their professions (for example, as scientists, policy makers, or teachers) and their daily activities.

Some of the data decisions people make are about identifying a “true” value – what is the exact time, or what is the exact measurement of the distance a ball has traveled? Engineers, for example, must determine the maximum weight that an airplane can carry. In formal contexts, scientists conduct experiments to answer such questions, taking repeated measurements to assess an average or mean value. Repeated measurements are important because similar results across multiple trials justify greater confidence in the estimate than a single reading can, and because measurements that cluster tightly warrant more confidence than ones that vary widely.
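As a minimal illustration of this logic (a constructed example with invented numbers, not data from any study described here), consider estimating how far a ball travels from a handful of repeated trials:

```python
import statistics

# Hypothetical distances (in cm) a ball traveled on five repeated ramp trials
trials = [312.0, 308.5, 315.2, 310.1, 309.7]

mean = statistics.mean(trials)     # best single estimate of the "true" distance
spread = statistics.stdev(trials)  # small spread -> more trust in that estimate

print(f"estimated distance: {mean:.1f} cm (sample SD {spread:.1f} cm)")
```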

Sometimes in decision-making, the goal is to compare two sets of data to determine if they are different. For example, how can you determine which of two golfers is more skilled at driving a golf ball? If you are given a list of the lengths of six drives from each of the golfers, you have two sets of data to compare. In the simplest case, if all the drives for Golfer A are farther than all the drives for Golfer B, it’s easy to make a decision about which golfer is likely more skilled. However, as in considering measurement for assessing one single value, other data characteristics can play an important role. If the data for Golfer A are all within a 20-yard range while for Golfer B they are within a 100-yard range, you may feel more confident in ranking Golfer A’s ability than Golfer B’s. Without knowing anything further about the golfers, you may even begin to develop a story to explain the pattern: perhaps Golfer A is more disciplined and swings a golf club consistently with each drive, while Golfer B is more impulsive and therefore more erratic. If Golfer A’s drives are usually but not always farther than Golfer B’s, you may still be confident that Golfer A is a better golfer, but perhaps not quite as confident as you would be when Golfer A always drives the ball farther. If I then tell you that even though Golfer A on average drove a golf ball farther, Golfer A is actually an amateur hobbyist golfer and Golfer B is a professional golfer, you would likely quickly reassess the situation. Although the data are still very important, you would most likely conclude that the professional golfer was having an off day when you collected the data, or that the amateur golfer is an undiscovered prodigy.
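To make the comparison concrete, here is a small sketch with invented drive distances (these are not measurements from any actual golfers):

```python
import statistics

# Invented drive distances in yards, six per golfer: Golfer A is tightly
# clustered, while Golfer B is far more spread out
golfer_a = [252, 255, 248, 260, 250, 258]   # 12-yard range
golfer_b = [200, 280, 195, 270, 230, 245]   # 85-yard range

for name, drives in [("A", golfer_a), ("B", golfer_b)]:
    print(f"Golfer {name}: mean {statistics.mean(drives):.0f} yd, "
          f"range {max(drives) - min(drives)} yd")
```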

Determining when a difference in two data sets is meaningful and when it is not (e.g., assessing whether there is a real difference in ability between two golfers) is an important but challenging task. How do children and adults address such questions of data interpretation? Those with training in statistics can use formal approaches that allow them to predict the likelihood of a future event or to state with confidence whether a result is likely to have occurred by chance. However, children (and many adults) do not have such formal training. In addition to lacking formal statistics training, children have less background knowledge than adults about the world in general, which makes it even more difficult for them to detect patterns in data or even to know what factors to consider in data assessment.
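For readers curious what such a formal approach looks like, the sketch below applies one common option, a two-sample t-test, to the invented golfer numbers from the previous example (it assumes the SciPy library is available):

```python
from scipy import stats

# Invented drive distances (yards) from the earlier sketch
golfer_a = [252, 255, 248, 260, 250, 258]
golfer_b = [200, 280, 195, 270, 230, 245]

t_stat, p_value = stats.ttest_ind(golfer_a, golfer_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value means a mean difference this large would rarely arise by
# chance if the two golfers were equally skilled; a large p-value means the
# data cannot reliably distinguish them.
```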

How can children determine when variation in the data matters? Data and background knowledge interact: the background knowledge people bring to understanding data helps in evaluating the data and, at the same time, evaluation of data helps form the theoretical background knowledge that is used to form new predictions and interpret subsequent data. This interplay leads to a bit of a chicken-and-egg question: do we only build up our knowledge as we see and learn from new data, or do we only pay close attention to data when they do not fit with our past knowledge? For example, an adult drawing conclusions about weak shots from a professional golfer begins to come up with an explanation for the data (“He must have been having an off day,” “Perhaps he’s injured”). Similarly, children may begin to pay attention when they see data that conflict with their expectations. If a child understands that she is growing in height as she ages, she may be confused if her home measurement of being 100 cm tall one week is followed a week later by a measurement at the school nurse’s office of only 99.5 cm. Surely she cannot have lost height in the course of a week, so perhaps there is something flawed about one or both of the measurements. These inconsistencies between data and background knowledge can often inspire children to become curious about data. At the same time, there is some evidence that even when children directly observe phenomena, they do not always correctly process what they see. Thus, for example, when children believe – incorrectly – that two objects of different weights will fall to the ground at different speeds, they may simultaneously drop two objects of different weights, watch them fall at the same speed, and yet still be convinced that the heavier object fell faster (Chinn & Brewer, 2001).

Over the past several years, my colleagues and I have worked to learn more about how children and adults reason about data and outcomes from scientific experimentation.

Children’s understanding of error in scientific experimentation

In collaboration with Dr. David Klahr at Carnegie Mellon University, I looked at second- and fourth-grade children (average ages of about 8 and 10) and how they reasoned about data in a familiar context: reasoning about how far balls travel when they are rolled down ramps (Masnick & Klahr, 2003). Experiments with ramps are common in elementary school science classrooms, and children are usually accurate in predicting which factors will affect the distance and speed with which balls travel. In our task, children worked with ramps that could be set to a low or high steepness, with a smooth or rough surface, and they could roll a golf ball or a rubber ball down the ramp. They were asked to set up two ramps in such a way that they could determine if a particular factor (such as the steepness of the ramp) made a difference in how far a ball rolled after it came down the ramp. We also asked the children to make predictions, then run the test and interpret the results, providing reasons for their conclusions. Children also answered questions about what might happen if the experiment were to be rerun and what might cause similar or different results in a replication.

To perform an ideal test, children should set up an unconfounded design; that is, they should set up the two ramps such that they are identical in all settings except for the one variable being tested. Thus, a good test of ramp steepness would consist of setting up one ramp at a high (steep) height and one at a low height, while setting up both ramps with smooth surfaces and using a golf ball to roll down each ramp. As found in earlier work (Chen & Klahr, 1999), a large proportion of children set up confounded experiments (84 percent of the second graders and 60 percent of the fourth graders).
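To make the contrast concrete, the sketch below encodes one unconfounded and one confounded test of ramp steepness (the variable names are invented to match the description of the apparatus):

```python
# Unconfounded: the two setups differ only in steepness, so any difference in
# outcome can be attributed to steepness alone.
unconfounded = [
    {"steepness": "high", "surface": "smooth", "ball": "golf"},
    {"steepness": "low",  "surface": "smooth", "ball": "golf"},
]

# Confounded: three variables differ at once, so the cause of any difference
# in outcome is ambiguous.
confounded = [
    {"steepness": "high", "surface": "smooth", "ball": "golf"},
    {"steepness": "low",  "surface": "rough",  "ball": "rubber"},
]

def differing_variables(setup_a, setup_b):
    """Return the variables on which two ramp setups differ."""
    return [key for key in setup_a if setup_a[key] != setup_b[key]]

print(differing_variables(*unconfounded))  # ['steepness']
print(differing_variables(*confounded))    # ['steepness', 'surface', 'ball']
```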

However, in providing reasons for why replications of experimental trials might vary somewhat, children demonstrated a more sophisticated understanding in that nearly all of the children provided many potential external reasons for variation in the data. They proposed factors such as error in the experimenter’s use of the stopwatch to time the run, or the ball knocking against the side of the ramp, or an accidental knock on the table that could have affected the outcome. They recognized that even though the basic setup was the same, there were minor variations that were likely to influence the outcome. In addition, they were confident that although the precise distance a ball traveled might differ from trial to trial, which ball went farther overall was far less likely to vary with a repeated test. That is, they distinguished between small variations attributed to “random” error, and larger variations attributed to more systematic factors. At the same time, despite expressing complex ideas, children often had difficulty integrating these ideas with their conclusions, and rarely mentioned such causes for variation as reasons for confidence (or lack thereof) in their conclusions.

Recently, we have been collecting data for a follow-up project in which we will look at how children and adults reason in a domain in which prior beliefs are often inaccurate: the pendulum task. This task is also familiar in elementary science classrooms, but most children (and many adults) have inaccurate beliefs about which factors affect a pendulum’s speed. In fact, the length of a pendulum is the most important factor in determining its speed (as many do believe), but the mass of the bob does not play a role (although many expect it to). Data that contradict one’s belief can often be processed differently than data that fit with one’s belief, and so we have been exploring how people reason about variations in data that fit or do not fit with prior beliefs. We are also looking at how different approaches to the testing of variables affect how much people learn from data. Thus, some participants are guided systematically by the experimenter in testing the variables in a controlled manner, and others are given the freedom to choose which combinations of variables they would like to test. Some recent research suggests that there are cases in which direct guided instruction can lead to better long-term learning outcomes than more open-ended discovery learning (Klahr & Nigam, 2004), and we are interested in learning some of the boundary conditions for that finding: when might a lot of structure help learning, and when might it hinder learning?
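As background (a standard physics result, not a finding from our study): for small swings, the period of a simple pendulum is approximately T = 2π√(L/g), where L is the pendulum’s length and g is the acceleration due to gravity. The mass of the bob does not appear in the formula at all, which is why varying the bob’s weight leaves the pendulum’s timing unchanged.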

Children’s and adults’ understanding and use of data characteristics

In addition to considering the interplay between theory and data in the context of simple science experiments, another related line of work I have pursued (in collaboration with Dr. Brad Morris at Grand Valley State University) involves exploring the specific characteristics of data children and adults use. Although many researchers have demonstrated that theoretical background knowledge influences data interpretation (e.g., Chinn & Malhotra, 2002; Koslowski, 1996; Kuhn & Dean, 2004; Schauble, 1996), few have looked directly at how characteristics of data influence theoretical knowledge.

In a forthcoming paper, we describe work in which we asked third graders, sixth graders and college students to reason about sets of data (Masnick & Morris, in press). The data were presented as paired comparisons of outcomes (i.e., distance traveled) from either a robot or an athlete using different sports balls (such as golf balls, baseballs and basketballs). On each trial, participants saw two columns of data. The data sets varied by sample size (number of data points) and by how much the data varied (for example, in some data sets, all the numbers in one column were larger than the numbers in the other column, while in other data sets, many of the numbers overlapped). Participants were asked to state their confidence that the two columns of data were the same as or different from one another.

We found that although college students were the most likely to base their confidence ratings on data characteristics, even in the third grade many students used sample size and variation (i.e., the differences between the numbers in each column) when rating how confident they were that there was a difference between sets. In addition, with age, participants became better able to explain their confidence in terms of characteristics of the data. Interestingly, despite the fact that participants were deliberately given very little contextual background from which to draw conclusions, approximately half the participants in each age group came up with reasons for the data outcomes that relied on issues other than the data – factors drawn from theoretical background knowledge they inferred to explain the patterns of data. For example, many inferred that a robot would be less variable than a person, or suggested that a difference in outcomes might be due to how much each ball was inflated or how aerodynamic each ball was.
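One way to see why both characteristics should matter (a textbook statistical observation, not an analysis from our study) is that the standard error of a column’s mean shrinks as the sample grows and swells as the spread increases; the columns below are invented:

```python
import math
import statistics

def standard_error(values):
    """Standard error of the mean: sample SD divided by the square root of n."""
    return statistics.stdev(values) / math.sqrt(len(values))

small_tight = [98, 100, 102]
large_tight = [98, 100, 102, 99, 101, 100, 97, 103]
large_loose = [80, 120, 95, 105, 70, 130, 90, 110]

for label, column in [("small, low variation", small_tight),
                      ("large, low variation", large_tight),
                      ("large, high variation", large_loose)]:
    print(f"{label}: mean {statistics.mean(column):.0f}, "
          f"SE {standard_error(column):.1f}")
```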

In some follow-up work, we have been exploring other ways of looking at how people represent number sets in the brain. One strategy for assessing such representations involves looking at reaction times and accuracy in choosing which set has higher numbers. We have asked people to quickly compare number sets with varying characteristics, including sample size, difference between means, and range of the numbers. If reaction times and accuracy vary based on multiple data characteristics, it might suggest that cognitive representations of number sets include information about these characteristics, and that these characteristics may be considered either implicitly or deliberately in making judgments about the data.

Summary

Even without formal statistics training, many children and adults recognize and use data characteristics such as variation and sample size when drawing conclusions. However, context matters. When data generally match prior expectations, reasoning about them rarely leads to belief revision, though even in these situations children exhibit an emerging understanding of different sources of error and variation in the data. Also, when children and adults use these data characteristics, they cannot always clearly articulate their reasoning. With age and experience, they talk more explicitly about how these characteristics play a role, but their reasoning is not always linked directly to their conclusions. There is still much to learn about how we learn and use data, and what factors lead to change over time. Understanding more about the cognitive representations we create of data may be one important step toward improving our understanding of this topic.

*Many of the issues discussed in this essay are described in greater detail in: Masnick, A.M., Klahr, D., & Morris, B.J. (2007). Separating signal from noise: Children’s understanding of error and variability in experimental outcomes. In M. Lovett & P. Shah (Eds.), Thinking With Data (pp. 3-26). New York: Lawrence Erlbaum Associates.

Acknowledgements

The research described here was supported in part by NIMH training grant T32 MH19102, NICHD (HD25211) to David Klahr, and NSF (HD25211) to David Klahr and partially subcontracted to Hofstra University.

References

Chen, Z., & Klahr, D. (1999). All other things being equal: Acquisition and transfer of the Control of Variables Strategy. Child Development, 70, 1098-1120.

Chinn, C.A., & Brewer, W.F. (2001). Models of data: A theory of how people evaluate data. Cognition and Instruction, 19, 323-393.

Chinn, C.A., & Malhotra, B.A. (2002). Children’s responses to anomalous scientific data: How is conceptual change impeded? Journal of Educational Psychology, 94, 327-343.

Klahr, D., & Nigam, M. (2004). The equivalence of learning paths in early science instruction: Effects of direct instruction and discovery learning. Psychological Science, 15, 661-667.

Koslowski, B. (1996). Theory and Evidence: The Development of Scientific Reasoning. Cambridge, MA: MIT Press.

Kuhn, D., & Dean, D., Jr. (2004). Connecting scientific reasoning and causal inference. Journal of Cognition and Development, 5, 261-288.

Masnick, A.M., & Klahr, D. (2003). Error matters: An initial exploration of elementary school children’s understanding of experimental error. Journal of Cognition and Development, 4, 67-98.

Masnick, A.M., & Morris, B.J. (in press). Investigating the development of data evaluation: The role of data characteristics. Child Development.

Schauble, L. (1996). The development of scientific reasoning in knowledge-rich contexts. Developmental Psychology, 32, 102-119.
