Technology and Critical Thinking
Jennifer Hiltner
In education, a tidal wave of technology is upon educators, administrators, and students. The message to teachers from students and the media is clear: get on your board; we are ready to ride. However, some conservatives, dubbed technophobes, are hesitant to put on their flippers. A growing body of literature suggests that ubiquitous access to technology is really hurting us, young people and adults alike. Yet the scientific research supporting either side of this argument is thin: at best, each side can cite a handful of sound scientific studies; at worst, each has only conjecture. So, what is best for students? Does American society’s constant connectedness to technology really hamper our ability to think critically, pay attention, and maintain focus?
Over the past 30 years, the culture of American youth has undergone significant change. Children of the 1980s grew up watching MTV, playing outside, and calling their friends on the telephone. Children of the 2010s are growing up constantly connected to the Internet via phone, tablet, or computer, choosing to instant message or chat on social media sites rather than talk to, or play with, their friends. These changes have also influenced the way young people access information, the way information is produced, and the speed at which information travels. These drastic changes in behavior have happened over a short span of time, and researchers have dubbed them phases of the Web (O’Reilly).
When the Internet was born in the mid-1980s in California, its infancy stage, which has become known as Web 1.0, focused on making the Internet accessible to anyone, anywhere: in other words, democratizing access. Web 2.0 is said to have begun in the early 2000s and focused on increasing participation in the creation of content on the Internet, otherwise known as democratizing participation (O’Reilly). Some believe Web 3.0 is beginning now, focused on virtual reality, virtual work, and choosing the virtual over the real, or democratizing immersion (Davidson).
A topic of great debate in the media today is whether technology and access to unlimited information are good for people. Some authors hypothesize that the changes in behavior and constant access to the Internet have diminished humans’ capability to think, read, and write. Others believe constant access has changed the way humans think, read, and write rather than diminished it. These arguments have a profound impact on the world of education.
School districts across the United States are grappling with the decision to invest their money in technology. Do they invest in iPads or Kindle Fires, Chromebooks or netbooks, 1:1 programs or carts? Some school districts and educators are questioning whether to invest at all. If this technology and pervasive access to the Internet are really hurting humans’ ability to think, read, and write, is supplying more technology what is best for students? Much of the research that exists today on technology and critical thinking skills is based on hypotheses and conjecture.
To understand the impact of technology on education, it is important to first understand the major differences between the 20th century and 21st century educational models and modes of thinking. Also, the topic of assessment is particularly important and will be examined in greater detail below.
Twentieth century. The model of education in the 20th century was developed on the ideals of the Industrial Revolution. Robinson asserts that children are educated much as producers manufacture goods: students enter the educational system based on their age, progress in a batch-like manner, and emerge at the end with a diploma. Davidson continues this analogy, asserting that many other ideas from the Industrial Revolution have pervaded the educational system. For instance, the qualities of production, like focus on a single task, became the expectations for students and teachers as well. Teachers and students in the twentieth century were expected to begin a task, work through it, and complete it without interruption, then repeat the process with the next task, much like the assembly-line worker. Davidson says, “Everything about twentieth century education and the workplace is designed to reinforce our attention to regular, systematic tasks that we take to completion. Attention to task is at the heart of industrial labor management” (6). This single-task orientation, she argues, is the exact problem with education.
The United States has created a system of education that teaches the skills necessary for an assembly-line economy. But the United States is no longer an assembly-line country. Industrialization is over; this is a world of globalization and digital citizenship, and new skills are needed to survive and thrive. Therefore, educating children through assembly-line procedures is outdated and promotes skills that are no longer valued, like attention to a single task (Davidson).
Twenty-first century. Twenty-first century schools look very much like the schools of the twentieth century. Schools operate in a departmental manner; students learn subjects like math, reading, and physical education in separate time slots and move from one to the next, much as a factory sounds whistles to signify the beginning and end of a shift. However, the students of today are not the students of the Industrial Revolution. Prensky dubs today’s students Digital Natives and says they are bored with the outdated modes of instruction (“Digital Natives”). “Among the top quartile of high school students, the most frequent complaint and cause of disaffection from schooling is boredom and lack of rigor. That also happens to be true among the lowest group” (Davidson 75). Students are looking for an education that validates their lives and will be relevant to their futures, neither of which a 20th century school provides.
Students in the twenty-first century need different skills to be successful in their future. Wagner, creator of the Seven Survival Skills of the 21st Century, asserts that twenty-first century learners will need the following skills to be competitive: “critical thinking and problem solving, collaboration across networks and leading by influence, agility and adaptability, initiative and entrepreneurship, effective oral and written communication, accessing and analyzing information, and curiosity and imagination” (“Tony Wagner’s Seven Survival Skills”). In the keynote address at a technology conference in 2012, Wagner stated, “The world doesn’t care what [kids] know, but what they can do with what they know” (TIES Technology Conference). Davidson agrees with the need for new skills and goes on to say, “Learning to think in multiple ways, with multiple partners, with a dexterity that cannot be computerized or outsourced, is no longer a luxury but a necessity” (77).
Assessment. Critics of technology assert that technology is making humans less intelligent; opponents of that idea point to flaws in assessment tools to discredit the assertion. Binet, the creator of the standardized IQ test, recanted his belief that his test could accurately determine intelligence (qtd. in Davidson 118). Salthouse demonstrated large variability in an individual’s performance on the same standardized test, asserting that each test-taker is a stand-alone bell curve. “The Salthouse study, and the critique of testing that goes all the way back to its origins with Kelly and Binet, suggest…we may be teaching to contradictory, inconsistent, and inconclusive tests of lower-order thinking” (Davidson 123). Some researchers say that technology is diminishing our ability to think; others say the methods we use to assess thinking are flawed.
New methods of assessment need to be created to better quantify learning of higher-order thinking skills. Davidson proposes that educators “need to be testing for more complex, connected and interactive skills”; historically, the A-F grade scale was accepted for quantifying lower-order skills but was viewed as unsuitable for higher-order, complex thinking (125). In addition to higher-order thinking skills, researchers must also be aware of the process and implications of brain development.
Many anxieties about technology diminishing people’s ability to think are based on an old idea of brain development. Researchers used to believe neural development was linear and static: knowledge, once learned, accumulated and was retained, and the brain was thought to lose its capacity to learn at some point in old age. Brain researchers now know this to be false. Davidson says, “The brain is not static. It is built for learning and is changed by what it encounters and what operations it performs. Retooled by the tools we use, our brain adjusts and adapts” (16).
The modern idea of brain plasticity, or the Hebbian principle, states that neurons that fire together, wire together. In other words, “the more we repeat certain patterns of behavior (that’s the firing together), the more those behaviors become rapid, then reflexive, then automatic (that’s the wiring)” (Davidson 45). Conversely, when we do not repeat patterns of behavior or thought, those neural pathways wither away, causing a loss of skill; this is called neural shearing. While neural shearing sounds negative, it is necessary: infants lose 40% of their neurons as they mature, and when this shearing does not happen, the result is developmental disability (Davidson).
Prensky asserts that, due to neuroplasticity, “today’s students think and process information fundamentally differently from their predecessors” (“Do They Really Think Differently” 4). “Stimulation of various kinds actually changes brain structures and affects the way people think, and that these transformations go on throughout life” (13). Brains do not rewire casually, easily, or arbitrarily; it takes sharply focused attention to rewire a brain, and kids began this process in 1974 with the arrival of Pong (16). Kids today continue to rewire their brains through the use of technology; “they develop hypertext minds. They leap around. It is as though their cognitive structures were parallel, not sequential” (16). In fact, students’ brains are so different from previous generations’ that “linear thought processes that dominate educational systems now can actually retard learning for brains developed through game and Web-surfing processes on the computer” (17).
Small and Vorgan claim the evolutionary process of thought has “rapidly emerged over a single generation and may represent one of the most unexpected yet pivotal advances in human history” (77). They cite a Stanford study showing a correlation between increased computer use and decreased time spent in face-to-face conversation. They suggest this has weakened the brain’s neural circuitry controlling human contact and will result in awkward social interactions and increased miscommunication. They dub the resulting divide the brain gap between generations: two separate cultures, one valuing an online life despite declining social skills, the other feeling forced to adapt to high-technology usage.
A University of California, Los Angeles study tested the hypothesis that “computer searches and other online activities caused measurable and rapid alterations to brain and neural circuitry” (qtd. in Small and Vorgan 88). The results showed it took less than five hours of Internet surfing to produce measurable changes on brain scans, indicating neural rewiring.
On this point the research is consistent: people’s brains are changing because of technology. Davidson, however, maintains that this is not negative:
We humans tend to worry about the passing of what and who we once were, even though our memories, with distance, grow cloudy. When calculators were invented, people were concerned about the great mental losses that would occur because we no longer used slide rules. With programmable phones, people wonder if anyone will memorize phone numbers anymore. Both predictions have probably come true, but once we no longer think about the loss, the consequences stop seeming dire. (57)
Davidson insists those who worry about the losses should shift their perspective. They need to stop viewing the change as negative and focus on what society has to gain. The same perspective and argument can be made for the effects of reading and writing.
Effects on Reading and Writing
The pervasive use of technology, some argue, has degraded the way people read. Carr acknowledges that Americans are reading more than they did in the 1970s and 1980s but proposes that a different type of reading is occurring: more skimming than thorough reading. Carr, quoting Wolf, asserts that “’we tend to become mere decoders of information.’ Our ability to interpret text, to make the rich mental connections that form when we read deeply and without distraction, remains largely disengaged.” Likewise, Deresiewicz states, “Lost is the propensity for sustained reading…Reading now means skipping and skimming” (314).
Wolf takes these assertions one step further, claiming that the reading brain has become unintentionally endangered. People need to become expert readers before they become immersed in a digital world. Wolf proposes that reading allows people time to think, a necessary skill for savvy Internet users.
Nielsen approached the topic of reading from a different angle. In a study of website usability, Nielsen found that the number one cause of teenagers’ difficulty navigating websites was insufficient reading skills; sub-par research skills and a very low patience level were the next most common causes.
Similar to the research findings on reading, writing has increased in quantity but decreased in quality. Johnson found that many of the eight million bloggers today are teenagers chronicling their lives online. Thirty years ago, teenagers were not chronicling their lives; they were watching Laverne & Shirley. Johnson maintains that it is “…better to have minds actively composing the soap opera of their own lives than zoning out in front of someone else’s” (30).
The increase in online blog writing has resulted in what many believe is a decrease in the quality of what is being written. Shirky argues that the end of Gutenberg economics has allowed for the increased publication of writing at the expense of quality. Under Gutenberg economics, publishers controlled what was published because publication was expensive and directly tied to their financial well-being; in the Internet age, control has shifted from publisher to creator, everyone can be a producer, and there is no gatekeeper refusing to publish work. This new mode of publication has increased freedom of creation but lowered the overall quality of what gets published. Shirky also argues that increasing the freedom of creation allows for more experimentation in what gets thought, said, and heard; therefore, society will see a “rapid fall in average quality, but over time experimentation pays off, diversity expands the range of the possible, and the best work becomes better than what went before” (326).
Keen, however, does not see the positive side of increased freedom for publication. He argues that no quality control in what gets published will result in a degradation of culture, resulting in a “cultural flattening” (247). Everyone will be an author, even the uneducated and inarticulate, and there will be no audience left. Siegel takes this idea a step further indicating that, “technology has turned back the clock from disinterested enjoyment of high and popular art to a primitive culture of crude, grasping self-interest” (303).
Davidson, however, has come to conclusions contrary to those previously presented. As a professor at Duke University, she monitored the writing performance of her students. She says of her students:
Their writing online, at least in their blogs, was incomparably better than in the traditional term papers they wrote for class. In fact, given all the tripe one hears from pundits about how the Internet dumbs our kids down, I was shocked that the elegant bloggers often turned out to be the clunkiest and most pretentious of research paper writers. (101)
Davidson also cites a longitudinal study conducted at Stanford by Lunsford:
Lunsford, a distinguished professor of rhetoric, used the same metric to evaluate the quality of writing of entering Stanford students year after year. Lunsford surprised everyone with her findings that students were becoming more literate, rhetorically dexterous, and fluent – not less, as many feared. Using the same evaluative criteria year after year, she could not prove any deleterious effects on the writing of the exceptionally gifted students at Stanford from exposure to the Internet. (101)
Based on the results of these two studies, one formal and longitudinal, the other an informal observation of her own classroom, Davidson believes the Internet has increased the quality of academic writing in the college setting. But can people still think?
Effects on Thinking and Reflection
The effect of technology on people’s ability to think is a highly debated topic. Carr believes that Google is making the mind more efficient, increasing the ability to process information at faster speeds; however, this processing is superficial. He says, “The Net is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles.” Friedman, as paraphrased by Carr, said his thinking “has taken on a ‘staccato’ quality.” Like Carr, Small and Vorgan have found that one’s ability to sift through large amounts of information quickly has improved, as has one’s ability to decide what is important and what is not. They also claim that many people are “developing neural circuitry that is customized for rapid and incisive spurts of directed concentration” (95), and that average IQs are increasing with the advancement of digital culture. Carr cites research supporting the proposition that IQs have been on the rise; however, he is careful to point out that IQs have been rising since World War II and cautions the public not to credit the gains to the adoption of technology. Flynn, as cited by Carr, claims these rises in IQ have less to do with true intelligence gain and more to do with the methods we use to measure IQ and our beliefs about intelligence.
According to Deresiewicz, the art of reflection is a lost skill. Prensky claims that, due to synapse pruning, the skill of reflection has been lost: Digital Natives do not know how to reflect, and the trick to teaching them reflection is to embed it in their education using what Prensky terms Digital Native language (“Digital Natives”).
Similarly, Deresiewicz believes that one’s ability to be alone has been lost. American society historically valued solitude, but today’s society values connectedness. To prove his point, Deresiewicz compares television to the Internet. Television eliminated the need to learn how to keep oneself busy, resulting in the lost ability to be still and idle: the creation of boredom. The Internet has done the same thing, eliminating the need to be alone and, with it, the ability to be alone. This inability to be alone has ramifications: a loss of religiosity, of the ability to reflect, and of appreciation for one’s own depth.
Many researchers are concerned about the effects of technology on people’s attention span. Prensky asserted that the problem is not that Digital Natives cannot pay attention; it is that they choose not to pay attention (“Digital Nation”). Digital Natives need interactivity to maintain focus, particularly in education. Robinson and Davidson believe the problem is not with students’ ability to pay attention; there is no increase in attention deficit disorder (ADD). They assert that school is the problem; education today does not sustain students’ interest. Students are living in the most distracting, media-rich environment to date, yet schools look the same as they did 100 years ago.
Focus, even in the absence of technology, is difficult to maintain when there is nothing capturing our attention. Davidson writes about Rheingold’s experiment with his students, which showed their difficulty maintaining prolonged attention. He asked his students to turn off all devices, close their eyes, and sit in silence for five minutes; then he asked them to chart how their minds worked. Rheingold consistently observed that students struggled to map what their brains focused on, or how. Even with no direction or interruption, maintaining focus on any one thought for a sustained amount of time was difficult (278-279).
Closely related to shortening attention spans is the theory of continuous partial attention, described as “continually staying busy – keeping tabs on everything while never truly focusing on anything” (Small and Vorgan 92). This is different from multitasking, where one has a purpose for every action; while paying partial attention, one merely looks for opportunities to make short connections. Turkle agrees with the theory of continuous partial attention. She recounted a day she spent working only with her calendar and email; at the end of it, she realized she had not engaged in any deep, meaningful thought at all, which she found concerning (“Digital Nation”).
Another concern regarding attention is the declining ability to maintain focus. Jackson considers a study showing the effects of television on children’s ability to pay attention: while a television was on in the room, children’s attention spans were negatively affected. If television has a negative effect, what is the Internet doing to children? People are born to be interrupt-driven and to pay partial attention, but people must strive to maintain focus.
Maintaining focus can be difficult, especially with dinging email notifications, hyperlinks, and music all at our fingertips. Oppenheimer says this instant gratification of email notifications, hyperlinks, and music “bifurcate the brain, keep it from being able to pursue linear thought and teaches you that you should be able to have every urge answered in the minute the urge occurs” (“Digital Nation”). He urges schools to slow down and maintain sustained conversation on any topic without distraction by machines, as schools are the last place this can happen. Jenkins, however, says everyone, everywhere, in every era has been distracted. He posits that this is not a new issue and that society therefore should not worry; instead, he encourages open-mindedness in this new era of technological adoption (“Digital Nation”).
Multitasking is not a new idea; however, researchers are having new conversations about multitasking. Davidson proposes that walking while carrying something qualifies as multitasking because a person is performing two actions at one time. The same can be said for driving, playing basketball, or various other tasks. According to numerous researchers, the reason one does not think of these tasks as multitasking is because of the Hebbian principle: neurons that fire together, wire together. When actions are repeated, the actions become automatic and require less mental processing power. Many of the tasks performed in daily life are not perceived as multitasking, although they accomplish multiple tasks at one time.
Stanford researcher Clifford Nass designed a study of young people’s ability to multitask. His findings made headlines by pronouncing that even the best multitaskers were worse at multitasking than those who monotask. Nass says of multitaskers, “[they are] dumbing down the world” (“Digital Nation”).
The validity of Nass’s study, however, has been highly scrutinized. Davidson believes that Nass’s official finding of “’reduced ability to filter out interference’ is actually a new mashup style of mental blending that helps one be a good multitasker” (283). Even the researchers themselves, in the fine print of their study, acknowledge using an outdated method of interruption for their experiment. In direct contradiction to Nass, Small and Vorgan have found that one’s ability to multitask without errors is improving with the advancement of digital culture.
Whether multitasking is good or bad, the culture of multitasking is here to stay. Digital Natives are used to multitasking, believe they are good at it, and like receiving information quickly (Prensky, “Do They Really Think Differently”). This behavior has changed the way students behave and the way teachers teach; education now needs to be the distractor from the technology to be effective (“Digital Nation”).
Jackson proposes that multitasking carries many dangers. “Reading email while talking on the phone involves reading and then chatting, chatting and then reading” (278). She posits that the benefits of multitasking do not outweigh the switch costs: the mental delay and the lost thread of thought. Meyer believes multitasking “exemplifies a head-down, tunnel vision way of life that values materialism over happiness, productivity over insight and compassion” (qtd. in Jackson 289).
The scientific research in all of these areas is lacking; the media, however, has offered plenty of conjecture and hypotheses to support its suppositions. Society today is in the midst of great change. The United States’ educational system is based on ideals from the Industrial Revolution, and schools are preparing students for a world that no longer exists using assessment methods no longer deemed valid. Technology and pervasive access to the Internet are changing the structure of the modern brain, some believe with deleterious effects. Reading, writing, thinking, and reflective practices have changed compared with previous generations. Two bodies of thought surround these changed practices: some argue skills have diminished, and others argue skills have merely changed.
Society is, and has historically been, skeptical and even downright negative about the effects of any new technology. Society worried about the degradation of math skills with the advent of the abacus, again with the slide rule, and again with the calculator. Similar concerns about multitasking arose with the rising popularity of the car, and were compounded when radios were added to those cars. Society was overly concerned with the perceived loss of skills. However, here we are, decades later, and our society has not crumbled, nor have we seen a wholesale degradation of our ability to compute basic mathematics or to pay attention to the radio and drive simultaneously. I believe the Internet will not be the downfall of a generation. Our thoughts are not “shallow,” as Nicholas Carr would have us believe. Our brains are changing, but change is not negative. It simply is change.
If educators and administrators make substantive changes in how we approach the classroom and the use of technology, I believe society would see an antiquated system of education transform into the 21st century schools that we know are a possibility today. A generation of teenagers would be excited to come to school, engaged in their education, and in control of their learning. We would see new skills emerge and unnecessary skills fade. We would see true, educational, systematic change. Educators and mental health professionals would be prepared to help students become well-rounded, well-adjusted, productive, in-demand members of our global society.
Carr, Nicholas. “Is Google Making Us Stupid?” Atlantic Monthly, 2008. Web. 12 August 2013.
—. The Shallows: What the Internet is Doing to Our Brains. New York: W. W. Norton &
Company, Inc, 2011. Print.
Davidson, Cathy. Now You See It: How Technology and Brain Science Will Transform
Schools and Business for the 21st Century. New York: Penguin Group, 2011. Print.
Deresiewicz, William. “The End of Solitude.” The Digital Divide: Arguments For and Against
Facebook, Google, Texting, and the Age of Social Networking. Ed. Mark Bauerlein. New
York: Jeremy P. Tarcher/Penguin, 2011. 307-317. Print.
“Digital Nation.” Frontline. PBS. 2010. Online video.
Facione, Peter, and Noreen Facione. “The Holistic Critical Thinking Scoring Rubric – HCTSR.”
Insight Assessment. 2009. Web. 10 August 2013.
“FAQs: General Critical Thinking.” Insight Assessment, n.d. Web. 8 August 2013.
Jackson, Maggie. “Judgment: Of Molly’s Gaze and Taylor’s Watch.” The Digital Divide:
Arguments For and Against Facebook, Google, Texting, and the Age of Social
Networking. Ed. Mark Bauerlein. New York: Jeremy P. Tarcher/Penguin, 2011. 271-294. Print.
Johnson, Steven. “The Internet.” The Digital Divide: Arguments For and Against Facebook,
Google, Texting, and the Age of Social Networking. Ed. Mark Bauerlein. New York:
Jeremy P. Tarcher/Penguin, 2011. 26-33. Print.
Keen, Andrew. “Web 2.0.” The Digital Divide: Arguments For and Against Facebook, Google,
Texting, and the Age of Social Networking. Ed. Mark Bauerlein. New York: Jeremy P.
Tarcher/Penguin, 2011. 242-249. Print.
Nielsen, Jakob. “Usability of Websites for Teenagers.” The Digital Divide: Arguments For and
Against Facebook, Google, Texting, and the Age of Social Networking. Ed. Mark
Bauerlein. New York: Jeremy P. Tarcher/Penguin, 2011. 52-62. Print.
O’Reilly, Tim. “What is Web 2.0.” The Digital Divide: Arguments For and Against Facebook,
Google, Texting, and the Age of Social Networking. Ed. Mark Bauerlein. New York:
Jeremy P. Tarcher/Penguin, 2011. 215-229. Print.
Prensky, Mark. “Digital Natives, Digital Immigrants.” The Digital Divide: Arguments For and
Against Facebook, Google, Texting, and the Age of Social Networking. Ed. Mark Bauerlein. New York: Jeremy P. Tarcher/Penguin, 2011. 3-11. Print.
—. “Do They Really Think Differently?” The Digital Divide: Arguments For and
Against Facebook, Google, Texting, and the Age of Social Networking. Ed. Mark
Bauerlein. New York: Jeremy P. Tarcher/Penguin, 2011. 12-25. Print.
Robinson, Ken. “Changing Education Paradigms.” TED Talk. RSA Animate. 2010.
Salthouse, Timothy. “Implications of Within-Person Variability in Cognitive and
Neuropsychological Functioning for the Interpretation of Change.” Neuropsychology 21.4 (2007): 401-411. Web. 12 August 2013.
Shirky, Clay. “Means.” The Digital Divide: Arguments For and Against Facebook, Google,
Texting, and the Age of Social Networking. Ed. Mark Bauerlein. New York: Jeremy P.
Tarcher/Penguin, 2011. 318-334. Print.
Siegel, Lee. “A Dream Come True.” The Digital Divide: Arguments For and Against Facebook,
Google, Texting, and the Age of Social Networking. Ed. Mark Bauerlein. New York:
Jeremy P. Tarcher/Penguin, 2011. 295-306. Print.
Small, Gary, and Gigi Vorgan. “Your Brain is Evolving Right Now.” The Digital Divide:
Arguments For and Against Facebook, Google, Texting, and the Age of Social Networking. Ed. Mark Bauerlein. New York: Jeremy P. Tarcher/Penguin, 2011. 76-98. Print.
Wagner, Tony. Keynote address. TIES Technology Conference, Hyatt Hotel, Minneapolis, MN. 11 Dec. 2012.
—. “Tony Wagner’s Seven Survival Skills.” Tony Wagner Transforming Education. n.d. Web. 12
Wolf, Maryanne. “Learning to Think in a Digital World.” The Digital Divide: Arguments For
and Against Facebook, Google, Texting, and the Age of Social Networking. Ed. Mark
Bauerlein. New York: Jeremy P. Tarcher/Penguin, 2011. 34-37. Print.