The Flexibility of Computational Thinking

Edutopia

Three middle school projects—in English, math, and history—use computational thinking skills to address social justice topics.

 

Courtesy of Eli Sheldon
Students carefully plot their next maneuver to grab land in the simulation Scramble for Africa.

Computational thinking (CT) is a set of skills students can leverage to tackle hard problems of all kinds using ideas from computer science. These skills include:

  • Algorithmic thinking: using a well-defined series of steps to achieve a desired outcome
  • Decomposition: approaching a complicated problem by focusing on one piece at a time
  • Abstraction: representing a complicated system with a simple model
  • Pattern recognition: analyzing data and using trends to inform solutions
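The four skills are easiest to see side by side in a small worked example. The Python sketch below is invented for illustration (none of the names or numbers come from the article); it applies all four skills to a simple data question:

```python
# Hypothetical sketch: the four CT skills applied to one question --
# "are searches rising over time?"

def search_rate(stops, searches):
    """Abstraction: reduce a messy record set to a single number."""
    return searches / stops

def yearly_rates(records):
    """Decomposition: handle one year at a time."""
    return {year: search_rate(stops, searches)
            for year, (stops, searches) in records.items()}

def trend(rates):
    """Pattern recognition: is the rate going up or down?"""
    years = sorted(rates)
    return "rising" if rates[years[-1]] > rates[years[0]] else "falling"

# Algorithmic thinking: the whole pipeline is a well-defined series of steps.
records = {2015: (1000, 80), 2016: (1000, 95), 2017: (1000, 120)}
print(trend(yearly_rates(records)))  # rising
```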

CT can be used to address issues far beyond computer science, and projects with a social justice emphasis provide a platform for students to apply these skills to engaging, authentic learning opportunities. As Sydney Chaffee says in her TEDx talk on social justice in schools, “Authentic learning enables students to see and create connections in the world around them,” helping students understand why what they’re learning is vital.

Here are three examples of projects that teach social justice topics through a computational thinking lens.

REFORMING THE AMERICAN CRIMINAL JUSTICE SYSTEM

In this English language arts unit, seventh-grade students study the American criminal justice system while reading Walter Dean Myers’s Monster, a novel about a teen’s trial and imprisonment. In tandem with reading the novel, students conduct a mock trial of the familiar character Batman, using a flowchart to demonstrate the steps (the algorithm) that criminal suspects encounter. Batman is a useful defendant because students generally know about him and agree that his actions are illegal, though they disagree on whether he should be punished for those actions.

Harnessing the power of decomposition to break down our complicated legal system, student teams design their own justice systems. They establish policies on 12 topics, including drugs, mandatory minimums, body cameras, and juvenile detention. Using Twine, a platform for interactive storytelling, teams apply their laws to six real criminal cases. The outcome of each case is determined by that team’s previously established policies, which can be revised at the conclusion of the case.
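The Twine branching can be pictured as a policy lookup: each step of a case consults the team's previously established rules. The sketch below is an invented illustration of that structure, not the actual classroom material; the policy names and outcomes are hypothetical.

```python
# Hypothetical sketch of the Twine-style exercise: a team's policies are
# looked up to decide each step of a case. Policy names are invented here.
policies = {
    "mandatory_minimums": False,
    "juvenile_detention": "diversion_first",
}

def sentence(case, policies):
    """Apply previously established policies to a case, step by step."""
    if case["age"] < 18 and policies["juvenile_detention"] == "diversion_first":
        return "diversion program"
    if policies["mandatory_minimums"] and case["offense"] == "drug":
        return "5-year minimum"
    return "judge's discretion"

print(sentence({"age": 16, "offense": "drug"}, policies))  # diversion program
```

Because the outcome is driven entirely by the policy table, revising one policy after a case changes every subsequent case — the "unexpected consequences" students encounter.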

Students come to realize that decisions that initially seemed obvious lead to unexpected consequences when applied in different real-world scenarios. They walk away from this unit having internalized the need for major reform in our criminal justice system, as well as why some policy changes are not as straightforward as they first appear.

SIMULATING THE SCRAMBLE FOR AFRICA

In this social studies simulation adapted from a lesson by Andrew Patterson, students in grades 7–9 represent the interests of major European powers in the colonization of Africa. Using abstraction with a simplified set of objectives (e.g., resource types, geographic regions, and climates) and a gridded map of the continent, students choose specific squares to claim each round. The amount of land they claim is dependent on their country’s relative strength and colonial focus in that era.
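One round of the claiming mechanic can be sketched in a few lines. The strength weights and grid size below are invented for illustration; the article does not specify the lesson's actual rules.

```python
import random

# Hypothetical sketch of one claiming round; the strength values are
# invented, standing in for a country's relative power in a given era.
STRENGTH = {"Britain": 5, "France": 4, "Portugal": 2}

def claim_round(unclaimed, powers):
    """Each power claims a number of grid squares set by its strength."""
    claims = {}
    for power, strength in powers.items():
        take = min(strength, len(unclaimed))
        claims[power] = [unclaimed.pop() for _ in range(take)]
    return claims

# A 10x10 gridded map of the continent, shuffled so claims are scattered.
grid = [(row, col) for row in range(10) for col in range(10)]
random.Random(0).shuffle(grid)

claims = claim_round(grid, STRENGTH)
print({power: len(squares) for power, squares in claims.items()})
# {'Britain': 5, 'France': 4, 'Portugal': 2}
```

Running several rounds with era-appropriate weights is what produces a map students can compare against the historical outcome.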

At the end of the game, students compare their results with the true outcome of African colonization at the start of the 20th century. Despite the simplistic nature of the simulation, the correlation between maps is typically strong. The political and logistical nuances have been abstracted away, helping students understand the high-level motivations and decisions involved.

During the first run, students let their competitive nature show without hesitation. However, they are then tasked with confronting the deeper impact of their colonizing efforts—for instance, what it really meant for these European armies to claim African land, abduct slaves for labor, and exhaust natural resources. In subsequent rounds, teams struggle to balance the morals behind their actions with their desire to “win” the game.

EVALUATING RACIAL BIAS IN TRAFFIC STOPS

In this series of math lessons on probability and population sampling, seventh-grade students calculate the rates at which drivers of different races are searched at traffic stops. They compare their findings to census data to determine whether the numbers reflect random sampling or show evidence of racial bias.

To set context, students learn about their legal rights during traffic stops and why race matters during interactions with police. Next, they create tree diagrams with data from the Bureau of Justice Statistics to determine probabilities of being stopped by race, which they contrast with general population data. Using pattern recognition, students interpret the data (e.g., 10 percent of all drivers are African American, but 23 percent of all searched drivers are African American) and form evidence-backed conclusions about racial bias in traffic stops across the nation.
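The pattern-recognition step amounts to comparing two shares. Here is a minimal sketch using the figures quoted above; the variable names are illustrative, not the Bureau of Justice Statistics schema.

```python
# Figures quoted above: 10% of all drivers are African American, but
# 23% of all searched drivers are African American.
population_share = 0.10  # share of all drivers who are African American
searched_share = 0.23    # share of searched drivers who are African American

# Under random (unbiased) sampling, the two shares should be roughly equal.
disparity = searched_share / population_share
print(f"Overrepresented {disparity:.1f}x among searched drivers")  # 2.3x
```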

Finally, students again use random sampling on local police data to compare search rates in their county against their national results, bringing the issue even closer to home.

BRINGING SOCIAL JUSTICE AND COMPUTATIONAL THINKING TO YOUR CLASSROOM

Activities tied to issues of social justice can bolster learning in any class, and by approaching these topics with a CT lens, students can more easily draw connections across disciplines. Here are some tips to be successful:

  • Establish a respectful, safe atmosphere in your classroom by allowing yourself to be vulnerable and by ensuring that all students feel heard.
  • Draw on current events, a valuable source of ideas that students will naturally connect with.
  • Consider letting students voice which issues matter most to them.
  • Don't shy away from controversial topics; they can lead to the richest discussions.

The Future of Coding in Schools

Edutopia

Mitch Resnick, one of the creators of Scratch, on why he thinks coding should be taught in all schools—it’s not the reason you’d expect.

For more than three decades, Mitch Resnick has immersed himself in educational technology and innovative learning models. Now a professor at the MIT Media Lab, and a co-creator of the popular Scratch programming language, Resnick remains a tireless advocate for student-centered education, collaborative learning environments, and the idea that coding is a form of literacy.

His new book, Lifelong Kindergarten: Cultivating Creativity Through Projects, Passion, Peers, and Play, is a look at our current educational moment. “Roughly two-thirds of grade school students will end up doing work that hasn’t been invented yet,” Resnick contends, hinting at the emerging worlds of artificial intelligence, self-driving cars, and “smart” houses. How do we prepare today’s students to meet that challenge?


We talked with Resnick about the importance of coding in our school system, his thoughts on the changing roles of teachers, and new ways to engage students—and assess their work.

EDUTOPIA: You moved from journalism—writing about computers and business—to the field of educational technology and learning in the 1980s. What inspired that move?

MITCH RESNICK: The most important shift in my thinking about computers and learning actually came in the spring of 1982, at the West Coast Computer Faire—which was like an early form of Maker Faire—where Seymour Papert was giving a keynote address. When I heard Seymour talk, it gave me a new vision of what role computers might play in people's lives: They weren't just machines to get a job done—they could enable people to express themselves in new ways, and change the way people thought about themselves and thought about the world. That was very exciting to me.

EDUTOPIA: Are we still struggling with Papert’s early insight—almost astonishing at the time—that the computer isn’t just a processor of information but a platform for constructing human knowledge?

RESNICK: Yes I think so, and it mirrors a struggle in the education system that has nothing to do with technology. Many people think of learning and education as a process of delivering information or delivering instruction. Other people see learning and education as student-centered—learning is about exploring, experimenting, creating. Those are very different visions that predate the computer, but of course the computer can fit into either of those two models. It’s a wonderful device for delivering information, but it can also be a wonderful device for creating, exploring, and experimenting.

EDUTOPIA: There are influential people, like Apple CEO Tim Cook, saying, “What we need to do is get coding into every single public school. It needs to be a requirement in public schools across the board.” Is that right?

RESNICK: If it were up to me, I would introduce it. But I want to be careful because I don’t want to embrace it for the same reason that some people might. The first question I would ask is: “Why should we learn coding at all?” Many people embrace coding in schools as a pathway to jobs as computer programmers and computer scientists, and of course they’re right that those opportunities are expanding rapidly. But that’s not a great reason for everyone to learn how to code.

Very few people grow up to be professional writers, but we teach everyone to write because it’s a way of communicating with others—of organizing your thoughts and expressing your ideas. I think the reasons for learning to code are the same as the reasons for learning to write. When we learn to write, we are learning how to organize, express, and share ideas. And when we learn to code, we are learning how to organize, express, and share ideas in new ways, in a new medium.

EDUTOPIA: What does that look like in the school system? Does coding sit alongside math and reading? Is it integrated in some way?

RESNICK: These days I talk about our approach in terms of these four words that begin with the letter p: projects, passion, peers, and play. So that’s the approach I would take with coding, but also with any other learning: getting students to work on projects, based on their passion, in collaboration with peers, in a playful spirit. And each of those p’s is important. I think work on projects gives you an understanding of the creative process, how to start with just the inkling of an idea and then to build a prototype, share it with people, experiment with it, and continue to modify and improve it.

We know that kids are going to work longer and make deeper connections to the content when they are passionate about the ideas—when they care—and when they’re learning with and being inspired by peers. And I’d want to have kids experience coding in the same way.

EDUTOPIA: You’re describing a high-choice learning environment rooted in student passion and project work. Where’s the teacher in that mix?

RESNICK: The teacher still plays an incredibly important role, but in this approach it’s not so much about delivering instruction. One role the teacher is playing is the role of connector—connecting peers with one another to work together on solving problems. Teachers also act as catalysts by asking provocative questions: “What do you think will happen if…?” or “That surprised me, why do you think that happened?”

They’re consultants, too, and it’s not just about consulting on technical skills, but also about things like how you continue to work on something even when you are frustrated, or suggesting strategies for working with diverse groups of people. Finally, the teacher can be a collaborator, working together with kids on projects—because kids should see teachers as learners too.

EDUTOPIA: It sounds like a more democratic, open system, which seems to imply breaking down a lot of barriers?

RESNICK: I think breaking down barriers is a good way to think about it. When I think about the type of things that I might change in schools—and I know none of it is easy—a lot of it is about breaking down barriers. Break down the barriers between class periods, because 50-minute chunks are too constraining if you want to work on projects. Break down the barriers between disciplines, because meaningful projects almost always cut across disciplines. Break down the barriers between ages and have older kids work with younger kids—both groups benefit. And break down the barriers between inside of school and outside of school—have kids work on projects that are meaningful to their communities and bring people from the communities into the schools to support the teachers.

That’s one way of dealing with the challenge of a single teacher committed to 30 or more kids. It doesn’t have to be that way. Older kids can be helping younger kids, people from the community can be helping.

EDUTOPIA: A fair question—and a common criticism—is: How do you figure out whether kids are learning anything? How do you assess it?

RESNICK: I would take a portfolio-like approach, looking at what kids create. That’s what we do in our Scratch online community. You can see that a kid has created several dozen digital projects, and you can look through their projects and see their progression. For example, you might see the gradual adoption of new strategies—new types of artwork, but also new and improved programming structures.

I acknowledge that it's difficult to arrive at quantitative measures, but I also think we don't necessarily need to. I sometimes make the analogy to the way I've been evaluated here at MIT. There are actually no quantitative measures in the process. Basically, they look at my portfolio: They see what I've created, they look at the trajectory and the progress over time, and they ask other people's opinions about it. You'll sometimes hear, "Well that's not serious, we need quantitative measures to be serious." Are they making the claim that MIT is not serious? I understand the criticism that it's inefficient, but I think those are things we are going to need to deal with.

Again, it’s a big change and I’m not saying it’s easy, but I do think we need to move in that direction.

The surprising thing Google learned about its employees — and what it means for today’s students

December 20, 2017


The conventional wisdom about 21st century skills holds that students need to master the STEM subjects — science, technology, engineering and math — and learn to code as well because that’s where the jobs are. It turns out that is a gross simplification of what students need to know and be able to do, and some proof for that comes from a surprising source: Google.

This post explains what Google learned about its employees, and what that means for students across the country. It was written by Cathy N. Davidson, founding director of the Futures Initiative and a professor in the doctoral program in English at the Graduate Center, CUNY, and author of the new book, "The New Education: How to Revolutionize the University to Prepare Students for a World in Flux." She also serves on the Mozilla Foundation board of directors, and was appointed by President Barack Obama to the National Council on the Humanities.

By Cathy N. Davidson

All across America, students are anxiously finishing their “What I Want To Be …” college application essays, advised to focus on STEM (Science, Technology, Engineering, and Mathematics) by pundits and parents who insist that’s the only way to become workforce ready.  But two recent studies of workplace success contradict the conventional wisdom about “hard skills.” Surprisingly, this research comes from the company most identified with the STEM-only approach: Google.

Sergey Brin and Larry Page, both brilliant computer scientists, founded their company on the conviction that only technologists can understand technology. Google originally set its hiring algorithms to sort for computer science students with top grades from elite science universities.

In 2013, Google decided to test its hiring hypothesis by crunching every bit and byte of hiring, firing, and promotion data accumulated since the company's incorporation in 1998. Project Oxygen shocked everyone by concluding that, among the eight most important qualities of Google's top employees, STEM expertise comes in dead last. The seven top characteristics of success at Google are all soft skills: being a good coach; communicating and listening well; possessing insights into others (including others' different values and points of view); having empathy toward and being supportive of one's colleagues; being a good critical thinker and problem solver; and being able to make connections across complex ideas.

Those traits sound more like what one gains as an English or theater major than as a programmer. Could it be that top Google employees were succeeding despite their technical training, not because of it?  After bringing in anthropologists and ethnographers to dive even deeper into the data, the company enlarged its previous hiring practices to include humanities majors, artists, and even the MBAs that, initially, Brin and Page viewed with disdain.

Project Aristotle, a study released by Google this past spring, further supports the importance of soft skills even in high-tech environments. Project Aristotle analyzes data on inventive and productive teams. Google takes pride in its A-teams, assembled with top scientists, each with the most specialized knowledge and able to throw down one cutting-edge idea after another. Its data analysis revealed, however, that the company’s most important and productive new ideas come from B-teams comprised of employees who don’t always have to be the smartest people in the room.

Project Aristotle shows that the best teams at Google exhibit a range of soft skills: equality, generosity, curiosity toward the ideas of your teammates, empathy, and emotional intelligence. And topping the list: emotional safety. No bullying. To succeed, each and every team member must feel confident speaking up and making mistakes. They must know they are being heard.

Google's studies concur with others trying to understand the secret of a great future employee. A recent survey of 260 employers by the nonprofit National Association of Colleges and Employers, which includes both small firms and behemoths like Chevron and IBM, also ranks communication skills among the three qualities most sought after by job recruiters. They prize both an ability to communicate with one's workers and an aptitude for conveying the company's product and mission outside the organization. Or take billionaire venture capitalist and "Shark Tank" TV personality Mark Cuban: He looks for philosophy majors when he's investing in sharks most likely to succeed.

STEM skills are vital to the world we live in today, but technology alone, as Steve Jobs famously insisted, is not enough. We desperately need the expertise of those who are educated to the human, cultural, and social as well as the computational.

No student should be prevented from majoring in an area they love based on a false idea of what they need to succeed. Broad learning skills are the key to long-term, satisfying, productive careers. What helps you thrive in a changing world isn’t rocket science. It may just well be social science, and, yes, even the humanities and the arts that contribute to making you not just workforce ready but world ready.

Will Robots Take Our Children’s Jobs?

Credit: Richie Pope

Like a lot of children, my sons, Toby, 7, and Anton, 4, are obsessed with robots. In the children’s books they devour at bedtime, happy, helpful robots pop up more often than even dragons or dinosaurs. The other day I asked Toby why children like robots so much.

“Because they work for you,” he said.

What I didn’t have the heart to tell him is, someday he might work for them — or, I fear, might not work at all, because of them.

It is not just Elon Musk, Bill Gates and Stephen Hawking who are freaking out about the rise of invincible machines. Yes, robots have the potential to outsmart us and destroy the human race. But first, artificial intelligence could make countless professions obsolete by the time my sons reach their 20s.

You do not exactly need to be Marty McFly to see the obvious threats to our children’s future careers.

Say you dream of sending your daughter off to Yale School of Medicine to become a radiologist. And why not? Radiologists in New York typically earn about $470,000, according to Salary.com.

But that job is suddenly looking iffy as A.I. gets better at reading scans. A start-up called Arterys, to cite just one example, already has a program that can perform a magnetic-resonance imaging analysis of blood flow through a heart in just 15 seconds, compared with the 45 minutes required by humans.

Maybe she wants to be a surgeon, but that job may not be safe, either. Robots already assist surgeons in removing damaged organs and cancerous tissue, according to Scientific American. Last year, a prototype robotic surgeon called STAR (Smart Tissue Autonomous Robot) outperformed human surgeons in a test in which both had to repair the severed intestine of a live pig.

Robots put together vehicle frames on the assembly line at the Peugeot Citroën Moteurs factory. Credit: Sebastien Bozon/Agence France-Presse — Getty Images

So perhaps your daughter detours to law school to become a rainmaking corporate lawyer. Skies are cloudy in that profession, too. Any legal job that involves lots of mundane document review (and that’s a lot of what lawyers do) is vulnerable.

Software programs are already being used by companies including JPMorgan Chase & Company to scan legal papers and predict what documents are relevant, saving lots of billable hours. Kira Systems, for example, has reportedly cut the time that some lawyers need to review contracts by 20 to 60 percent.

As a matter of professional survival, I would like to assure my children that journalism is immune, but that is clearly a delusion. The Associated Press already has used a software program from a company called Automated Insights to churn out passable copy covering Wall Street earnings and some college sports, and last year awarded the bots the minor league baseball beat.

What about other glamour jobs, like airline pilot? Well, last spring, a robotic co-pilot developed by the Defense Advanced Research Projects Agency, known as Darpa, flew and landed a simulated 737. I hardly count that as surprising, given that pilots of commercial Boeing 777s, according to one 2015 survey, only spend seven minutes during an average flight actually flying the thing. As we move into the era of driverless cars, can pilotless planes be far behind?

Then there is Wall Street, where robots are already doing their best to shove Gordon Gekko out of his corner office. Big banks are using software programs that can suggest bets, construct hedges and act as robo-economists, using natural language processing to parse central bank commentary to predict monetary policy, according to Bloomberg. BlackRock, the biggest fund company in the world, made waves earlier this year when it announced it was replacing some highly paid human stock pickers with computer algorithms.

So am I paranoid? Or not paranoid enough? A much-quoted 2013 study by the University of Oxford Department of Engineering Science — surely the most sober of institutions — estimated that 47 percent of current jobs, including insurance underwriter, sports referee and loan officer, are at risk of falling victim to automation, perhaps within a decade or two.

Just this week, the McKinsey Global Institute released a report that found that a third of American workers may have to switch jobs in the next dozen or so years because of A.I.

I know I am not the only parent wondering if I can robot-proof my children’s careers. I figured I would start by asking my own what they want to do when they grow up.

Elon Musk, the C.E.O. of Tesla Motors. Credit: Marcio Jose Sanchez/Associated Press

Toby, a people pleaser and born entertainer, is obsessed with cars and movies. He told me he wanted to be either an Uber driver or an actor. (He is too young to understand that those jobs are usually one and the same).

As for Uber drivers, it is no secret that they are headed to that great parking garage in the sky; the company recently announced plans to buy 24,000 Volvo sport utility vehicles to roll out as a driverless fleet between 2019 and 2021.

And actors? It may seem unthinkable that some future computer-generated thespian could achieve the nuance of expression and emotional depth of, say, Dwayne Johnson. But Hollywood is already Silicon Valley South. Consider how filmmakers used computer graphics to reanimate Carrie Fisher’s Princess Leia and Peter Cushing’s Grand Moff Tarkin as they appeared in the 1970s (never mind that Mr. Cushing died in 1994) for “Rogue One: A Star Wars Story.”

My younger son Anton, a sweetheart, but tough as Kevlar, said he wanted to be a football player. Robot football may sound crazy, but come to think of it, a Monday night battle between the Dallas Cowdroids and Seattle Seabots may be the only solution to the sport’s endless concussion problems.

He also said he wanted to be a soldier. If he means foot soldier, however, he might want to hold off on enlistment. Russia recently unveiled Fedor, a humanoid robot soldier that looks like RoboCop after a Whole30 crash diet; this space-combat-ready android can fire handguns, drive vehicles, administer first aid and, one hopes, salute. Indeed, the world’s armies are in such an arms race developing grunt-bots that one British intelligence expert predicted that American forces will have more robot soldiers than humans by 2025.

And again, all of this stuff is happening now, not 25 years from now. Who knows what the jobs marketplace might look like by then. We might not even be the smartest beings on the planet.

Ever heard of the “singularity”? That is the term that futurists use to describe a potentially cataclysmic point at which machine intelligence catches up to human intelligence, and likely blows right past it. They may rule us. They may kill us. No wonder Mr. Musk says that A.I. “is potentially more dangerous than nukes.”

But is it really that dire? Fears of technology are as old as the Luddites, those machine-smashing British textile workers of the early 19th century. Usually, the fears turn out to be overblown.

The rise of the automobile, to cite the obvious example, did indeed put most manure shovelers out of work. But it created millions of jobs to replace them, not just for Detroit assembly line workers, but for suburban homebuilders, Big Mac flippers and actors performing “Greased Lightnin’” in touring revivals of “Grease.” That is the process of creative destruction in a nutshell.

But artificial intelligence is different, said Martin Ford, the author of “Rise of the Robots: Technology and the Threat of a Jobless Future.” Machine learning does not just give us new machines to replace old machines, pushing human workers from one industry to another. Rather, it gives us new machines to replace us, machines that can follow us to virtually any new industry we flee to.

Since Mr. Ford’s book sent me down this rabbit hole in the first place, I reached out to him to see if he was concerned about all this for his own children: Tristan, 22, Colin, 17, and Elaine, 10.

He said the most vulnerable jobs in the robot economy are those involving predictable, repetitive tasks, however much training they require. “A lot of knowledge-based jobs are really routine — sitting in front of a computer and cranking out the same application over and over, whether it is a report or some kind of quantitative analysis,” he said.

Professions that rely on creative thinking enjoy some protection (Mr. Ford’s older son is a graduate student studying biomedical engineering). So do jobs emphasizing empathy and interpersonal communication (his younger son wants to be a psychologist).

Even so, the ability to think creatively may not provide ultimate salvation. Mr. Ford said he was alarmed in May when Google’s AlphaGo software defeated a 19-year-old Chinese master at Go, considered the world’s most complicated board game.

“If you talk to the best Go players, even they can’t explain what they’re doing,” Mr. Ford said. “They’ll describe it as a ‘feeling.’ It’s moving into the realm of intuition. And yet a computer was able to prove that it can beat anyone in the world.”

Looking for a silver lining, I spent an afternoon Googling TED Talks with catchy titles like “Are Droids Taking Our Jobs?”

“Rise of the Robots,” by Martin Ford.

In one, Albert Wenger, an influential tech investor, promoted the Basic Income Guarantee concept. Also known as Universal Basic Income, this sunny concept holds that a robot-driven economy may someday produce an unlimited bounty of cool stuff while simultaneously releasing us from the drudgery of old-fashioned labor, leaving our government-funded children to enjoy bountiful lives of leisure as interpretive dancers or practitioners of bee-sting therapy, as touted by Gwyneth Paltrow.

The idea is all the rage among Silicon Valley elites, who not only understand technology’s power, but who also love to believe that it will be used for good. In their vision of a post-A.I. world without traditional jobs, everyone will receive a minimum weekly or monthly stipend (welfare for all, basically).

Another talk by David Autor, an economist, argued that reports of the death of work are greatly exaggerated. Almost 50 years after the introduction of the A.T.M., for instance, more humans actually work as bank tellers than ever. The computers simply freed the humans from mind-numbing work like counting out 20-dollar bills to focus on more cognitively demanding tasks like “forging relationships with customers, solving problems and introducing them to new products like credit cards, loans and investments,” he said.

Computers, after all, are really good at some things and, for the moment, terrible at others. Even Anton intuits this. The other day I asked him if he thought robots were smarter or dumber than humans. “Sdumber,” he said after a long pause. Confused, I pushed him. “Smarter and dumber,” he explained with a cheeky smile.

He was joking. But he also happened to be right, according to Andrew McAfee, a management theorist at the Massachusetts Institute of Technology whom I interviewed a short while later.

Discussing another of Anton’s career aspirations — songwriter — Dr. McAfee said that computers were already smart enough to come up with a better melody than a lot of humans. “The things our ears find pleasant, we know the rules for that stuff,” he said. “However, I’m going to be really surprised when there is a digital lyricist out there, somebody who can put words to that music that will actually resonate with people and make them think something about the human condition.”

Not everyone, of course, is cut out to be a cyborg-Springsteen. I asked Dr. McAfee what other jobs may exist a decade from now.

“I think health coaches are going to be a big industry of the future,” he said. “Restaurants that have a very good hospitality staff are not about to go away, even though we have more options to order via tablet.

“People who are interested in working with their hands, they’re going to be fine,” he said. “The robot plumber is a long, long way away.”

Robot-Proof: Higher Education in the Age of Artificial Intelligence

Northeastern president discusses his new book on how higher education can train students for careers where technology cannot make them redundant.

September 12, 2017

In the era of artificial intelligence, robots and more, higher education is arguably more important than ever. Academic researchers are producing the ideas that lead to technology after technology. On the other hand, a challenge exists for higher education: how to produce graduates whose careers won't be derailed by all of these advances. Now that robots can pick stocks, this isn't just about factory jobs but about the positions that college graduates have long assumed were theirs.

Northeastern University is involved in both sides of that equation. Its academic programs in engineering, computer science and other fields are producing these breakthroughs. And its students — at an institution known for close ties to employers — of course want good careers. Joseph E. Aoun, Northeastern’s president, explores these issues in Robot-Proof: Higher Education in the Age of Artificial Intelligence (MIT Press). Aoun is a scholar in linguistics when he’s not focused on university administration. His book argues that changes in the college curriculum are needed to prepare students in this new era, but that doesn’t mean ignoring the humanities or general education.

Aoun, one of seven presidents honored today by the Carnegie Corporation for academic leadership, responded via email to questions about his new book.

Q: How worried should college graduates be about being replaced by technology? Is it likely that many jobs today held by those with college degrees will be replaced by robots or some form of technology?

A: Smart machines are getting smarter, and many of the jobs performed by people today are going to disappear. Some studies predict that half of all U.S. jobs are at risk within the next 20 years. And it’s not just blue-collar jobs; today intelligent machines are picking stocks, doing legal research and even writing news articles. Simply put, if a job can be automated in the future, it will be.

For higher education to meet this challenge — for us to make people robot-proof — we need to change. In my book, I offer a blueprint for how we can accomplish this. We will need to re-envision the curriculum, invest in experiential education and put lifelong learning at the heart of what we do. It will not be easy, but we have a responsibility — to the students of today and tomorrow — to change the way we do business.

Q: In an era of adaptive learning and online learning, should faculty members be worried about their jobs in the future?

A: We’re seeing educational content become commoditized. Therefore, the job of faculty members has to go beyond simply transmitting knowledge. More than ever, the priority for faculty is to create new knowledge and act as the catalysts to make their students robot-proof. The personal connection between student and teacher cannot be replaced by a machine.

But, like students, faculty members must act to meet the challenge of today’s world and should embrace the transformation of higher education that I describe in my book.

Q: What is “humanics,” and what are the three kinds of literacy that you want colleges to teach?

A: Humanics is the curriculum for a robot-proof education. It is based on the purposeful integration of technical literacies, such as coding and data analytics, with uniquely human literacies, such as creativity, entrepreneurship, ethics, cultural agility and the ability to work with others.

The key is integration. We need to break down the academic silos that separate historians from engineers.

When I talk to employers, they tell me that they would give their right arm for more systems thinkers — quarterbacks who can see across disciplines and analyze them in an integrated way. And every student should be culturally agile, able to communicate across boundaries, and to think ethically. By integrating technology, data and humanities, we can help students become robot-proof.

Q: In your vision for the future of higher education, is this about embedding these skills into existing programs or starting from scratch?

A: Higher education has the elements for a robot-proof model, but we need to be much more intentional about how we integrate them. As I’ve mentioned, our curriculum needs to change so that technical and human literacies are unified.

We need to deliver this curriculum in an experiential way. This means recognizing that learning happens beyond the classroom through co-ops and meaningful internships. I truly believe that experiential education is the most powerful way to learn.

Still, no one is going to be set for life. We need to commit to lifelong learning in a way that we haven’t done in the past. Universities have been engaged in lifelong learning for many years, but it is usually treated as a second-class operation. We need to bring lifelong learning to the core of our mission.

This will require us to rethink the way we deliver education, particularly to working professionals who don’t have time to be on campus every day. Online and hybrid delivery modes will be essential. We have to meet learners wherever they are — in their careers and around the world.

Credentials will need to be unbundled so that learners don’t have to commit to long-term degree programs. Stackable certificates, badges and boot camps will become the norm.

These changes won’t happen by themselves. Institutions should establish authentic partnerships with employers, redesign courses to fill the gaps employers actually need filled, and connect employers with students through co-ops and internships.

Q: How is Northeastern getting ready for these changes?

A: Northeastern has designed its academic plan to meet the challenges — and opportunities — presented by smart machines. Beyond the curricular changes required by humanics, and our leadership in experiential learning, we are building a multicampus network spanning different cities, regions and countries. Learners will be able to gain access to this network wherever they are and whenever it’s convenient for them.

Throughout its history, higher education has adapted to changes in the world. Knowing what we know about the revolution of smart machines, we have a responsibility to remain relevant and an opportunity to make our learners robot-proof.

The Next Phase of the Maker Movement? Building Startups

Edsurge

“Everything that is old is new again!” Daniel Rabuzzi exclaims, his eyes lighting up with an excitement that seems to match the glowing, handcrafted flower pinned on his vest. He’s talking about the next wave of the Maker Movement, big news buzzing among makers in the inner circle.

Rabuzzi is the executive director of Mouse, a national nonprofit that encourages students to create with technology. The organization, now celebrating 20 years in operation, is part of the worldwide Maker Movement, encouraging students to get creative (and messy) when using technology to build things. Rabuzzi calls his work at Mouse “shop and home economics for the 21st century,” and his students “digital blacksmiths.”

Rabuzzi, like many experts within the Maker Movement, believes the heavy emphasis on standardized testing in schools, which has pushed the arts, shop and home economics into the shadows, is what spurred outside groups like Mouse to begin hosting alternative makerspaces for students. Throughout the years, Rabuzzi has seen the movement evolve. Most recently, he’s seen technology become more directly integrated with making, along with an uptick of women in leadership.

“It can’t just be the boys tinkering in the basement anymore,” says Rabuzzi, pointing to women in maker leadership, like littleBits founder Ayah Bdeir, who has encouraged more young girls to enter the space.

Now Rabuzzi, along with makers, investors and journalists, is buzzing about what they describe as the next wave of making: the Maker economy, which many believe will transform manufacturing in the United States by integrating with the Internet of Things (IoT), augmented reality (AR), virtual reality (VR) and artificial intelligence (AI).

“There is all this talk about bringing manufacturing back to America, and I feel like this is going to come back on a local level,” says Juan Garzon, a former Mouse student who started his own hardware company. He believes that personalized goods designed and manufactured by makers through mediums like 3D printing will drive the return of domestic manufacturing.

“The future of manufacturing is not a big plant, but someone designing what they want and developing custom-made things. It sounds so sci-fi, but it is within my lifetime,” Garzon continues.

News reports from Chicago Inno suggest that custom manufacturing designed by makers might become an active part of the domestic economy sooner than Garzon expects. Inno reports that several maker-entrepreneur spaces are popping up in the city, in hopes of developing places where creators can build scalable products to be manufactured, creating new businesses.

For many, talk of 3D printing and of merging making with AI evokes bleeding-edge topics, far from today’s realities. But for the technologists supporting Mouse, this is the world they want to prepare students for.

Mouse students at the 20th-anniversary party are already getting started. At the event, some students proudly showed off projects they designed in 3D spaces that can be viewed and altered in virtual reality. Many of the projects required a mixture of creativity, technical skill and awareness of societal needs. Displays showcasing green energy projects, along with digitized wearable technology for persons with disabilities, were spread throughout the room. Still, Rabuzzi imagines more.

He hopes that through making, students can test the limits of new technologies and do good for society. “How do we use Alexa and Siri in the Maker Movement?” Rabuzzi wonders aloud. He describes his idea of using AI to support students in designing, prototyping and creating new learning pathways in the future, but admits that he doesn’t yet have the funding or technology for such ambitious projects. He hopes that some of Mouse’s corporate funding partners will be interested in supporting these endeavors.

“We are preparing today’s young people for a cyber future,” he explains. “In the old days if you had a clever idea you had to go into a big company to get it done. Now you can make it yourself.”

To Write Better Code, Read Virginia Woolf

Mountain View, Calif. — THE humanities are kaput. Sorry, liberal arts cap-and-gowners. You blew it. In a software-run world, what’s wanted are more engineers.

At least, so goes the argument in a rising number of states, which have embraced a funding model for higher education that uses tuition “bonuses” to favor hard-skilled degrees like computer science over the humanities. The trend is backed by countless think pieces. “Macbeth does not make my priority list,” wrote Vinod Khosla, a co-founder of Sun Microsystems and the author of a widely shared blog post titled “Is Majoring in Liberal Arts a Mistake for Students?” (Subtitle: “Critical Thinking and the Scientific Process First — Humanities Later”).

The technologist’s argument begins with a suspicion that the liberal arts are of dubious academic rigor, suited mostly to dreamers. From there it proceeds to a reminder: Software powers the world, ergo, the only rational education is one built on STEM. Finally, lest he be accused of making a pyre of the canon, the technologist grants that yes, after students have finished their engineering degrees and found jobs, they should pick up a book — history, poetry, whatever.

As a liberal-arts major who went on to a career in software, I can only scratch my head.

Fresh out of college in 1993, I signed on with a large technology consultancy. The firm’s idea was that by hiring a certain lunatic fringe of humanities majors, it might cut down on engineering groupthink. After a six-week programming boot camp, we were pitched headfirst into the deep end of software development.

My first project could hardly have been worse. We (mostly engineers, with a spritzing of humanities majors) were attached to an enormous cellular carrier. Our assignment was to rewrite its rating and billing system — a thing that rivaled maritime law in its complexity.

I was assigned to a team charged with one of the hairier programs in the system, which concerned the movement of individual mobile subscribers from one “parent” account plan to another. Each one of these moves caused an avalanche of plan activations and terminations, carry-overs or forfeitures of accumulated talk minutes, and umpteen other causal conditionals that would affect the subscriber’s bill.

This program, thousands of lines of code long and growing by the hour, was passed around our team like an exquisite corpse. The subscribers and their parent accounts were rendered on our screens as a series of S’s and A’s. After we stared at these figures for weeks, they began to infect our dreams. (One I still remember. I was a baby in a vast crib. Just overhead, turning slowly and radiating malice, was an enormous iron mobile whose arms strained under the weight of certain capital letters.)

Our first big break came from a music major. A pianist, I think, who joined our team several months into the project. Within a matter of weeks, she had hit upon a method to make the S’s hold on to the correct attributes even when their parent A was changed.

We had been paralyzed. The minute we tweaked one bit of logic, we realized we’d fouled up another. But our music major moved freely. Instead of freezing up over the logical permutations behind each A and S, she found that these symbols put her in the mind of musical notes. As notes, they could be made to work in concert. They could be orchestrated.

On a subsequent project, our problem was pointers. In programming languages, a pointer is an object that refers to some master value stored elsewhere. This might sound straightforward, but pointers are like ghosts in the system. A single misdirected one can crash a program. Our pointer wizard was a philosophy major who had no trouble at all with the idea of a named “thing” being a transient stand-in for some other unseen Thing. For a Plato man, this was mother’s milk.

I’ve worked in software for years and, time and again, I’ve seen someone apply the arts to solve a problem of systems. The reason for this is simple. As a practice, software development is far more creative than algorithmic.

The developer stands before her source code editor in the same way the author confronts the blank page. There’s an idea for what is to be created, and the (daunting) knowledge that there are a billion possible ways to go about it. To proceed, each relies on one part training to three parts creative intuition. They may also share a healthy impatience for the ways things “have always been done” and a generative desire to break conventions. When the module is finished or the pages complete, their quality is judged against many of the same standards: elegance, concision, cohesion; the discovery of symmetries where none were seen to exist. Yes, even beauty.

To be sure, each craft also requires a command of the language and its rules of syntax. But these are only starting points. To say that more good developers will be produced by swapping the arts for engineering is like saying that to produce great writers, we should double down on sentence diagraming.

Here the technologists may cry foul, saying I’m misrepresenting the argument: they’re not calling to avoid the humanities altogether, only to defer them until after undergraduate study. “Let college be for science and engineering, with the humanities later.” In tech speak, this is an argument for the humanities as a plug-in.

But if anything can be treated as a plug-in, it’s learning how to code. It took me 18 months to become proficient as a developer. This isn’t to pretend software development is easy — those were long months, and I never touched the heights of my truly gifted peers. But in my experience, programming lends itself to concentrated self-study in a way that, say, “To the Lighthouse” or “Notes Toward a Supreme Fiction” do not. To learn how to write code, you need a few good books. To enter the mind of an artist, you need a human guide.

For folks like Mr. Khosla, such an approach is dangerous: “If subjects like history and literature are focused on too early, it is easy for someone not to learn to think for themselves and not to question assumptions, conclusions, and expert philosophies.” (Where some of these kill-the-humanities pieces are concerned, the strongest case for the liberal arts is made just in trying to read them.)

How much better is the view of another Silicon Valley figure, who argued that “technology alone is not enough — it’s technology married with liberal arts, married with the humanities, that yields us the result that makes our heart sing.”

His name? Steve Jobs.