The conventional wisdom about 21st century skills holds that students need to master the STEM subjects — science, technology, engineering and math — and learn to code as well because that’s where the jobs are. It turns out that is a gross simplification of what students need to know and be able to do, and some proof for that comes from a surprising source: Google.
All across America, students are anxiously finishing their “What I Want To Be …” college application essays, advised to focus on STEM (Science, Technology, Engineering, and Mathematics) by pundits and parents who insist that’s the only way to become workforce ready. But two recent studies of workplace success contradict the conventional wisdom about “hard skills.” Surprisingly, this research comes from the company most identified with the STEM-only approach: Google.
Sergey Brin and Larry Page, both brilliant computer scientists, founded their company on the conviction that only technologists can understand technology. Google originally set its hiring algorithms to sort for computer science students with top grades from elite science universities.
In 2013, Google decided to test its hiring hypothesis by crunching every bit and byte of hiring, firing, and promotion data accumulated since the company’s incorporation in 1998. Project Oxygen shocked everyone by concluding that, among the eight most important qualities of Google’s top employees, STEM expertise comes in dead last. The seven top characteristics of success at Google are all soft skills: being a good coach; communicating and listening well; possessing insights into others (including others’ different values and points of view); having empathy toward and being supportive of one’s colleagues; being a good critical thinker and problem solver; and being able to make connections across complex ideas.
Those traits sound more like what one gains as an English or theater major than as a programmer. Could it be that top Google employees were succeeding despite their technical training, not because of it? After bringing in anthropologists and ethnographers to dive even deeper into the data, the company expanded its hiring practices to include humanities majors, artists, and even the MBAs that, initially, Brin and Page viewed with disdain.
Project Aristotle, a study released by Google this past spring, further supports the importance of soft skills even in high-tech environments. Project Aristotle analyzed data on inventive and productive teams. Google takes pride in its A-teams, assembled with top scientists, each with the most specialized knowledge and able to throw down one cutting-edge idea after another. Its data analysis revealed, however, that the company’s most important and productive new ideas come from B-teams composed of employees who don’t always have to be the smartest people in the room.
Project Aristotle shows that the best teams at Google exhibit a range of soft skills: equality, generosity, curiosity toward the ideas of your teammates, empathy, and emotional intelligence. And topping the list: emotional safety. No bullying. To succeed, each and every team member must feel confident speaking up and making mistakes. They must know they are being heard.
Google’s studies concur with others trying to understand the secret of a great future employee. A recent survey of 260 employers by the nonprofit National Association of Colleges and Employers, which includes both small firms and behemoths like Chevron and IBM, also ranks communication skills among the top three qualities most sought by job recruiters. They prize both an ability to communicate with one’s workers and an aptitude for conveying the company’s product and mission outside the organization. Or take billionaire venture capitalist and “Shark Tank” TV personality Mark Cuban: He looks for philosophy majors when he’s investing in sharks most likely to succeed.
STEM skills are vital to the world we live in today, but technology alone, as Steve Jobs famously insisted, is not enough. We desperately need the expertise of those who are educated in the human, cultural, and social as well as the computational.
No student should be prevented from majoring in an area they love based on a false idea of what they need to succeed. Broad learning skills are the key to long-term, satisfying, productive careers. What helps you thrive in a changing world isn’t rocket science. It may well be social science, and, yes, even the humanities and the arts that contribute to making you not just workforce ready but world ready.
Like a lot of children, my sons, Toby, 7, and Anton, 4, are obsessed with robots. In the children’s books they devour at bedtime, happy, helpful robots pop up more often than even dragons or dinosaurs. The other day I asked Toby why children like robots so much.
“Because they work for you,” he said.
What I didn’t have the heart to tell him is, someday he might work for them — or, I fear, might not work at all, because of them.
It is not just Elon Musk, Bill Gates and Stephen Hawking who are freaking out about the rise of invincible machines. Yes, robots have the potential to outsmart us and destroy the human race. But first, artificial intelligence could make countless professions obsolete by the time my sons reach their 20s.
You do not exactly need to be Marty McFly to see the obvious threats to our children’s future careers.
Say you dream of sending your daughter off to Yale School of Medicine to become a radiologist. And why not? Radiologists in New York typically earn about $470,000, according to Salary.com.
But that job is suddenly looking iffy as A.I. gets better at reading scans. A start-up called Arterys, to cite just one example, already has a program that can perform a magnetic-resonance imaging analysis of blood flow through a heart in just 15 seconds, compared with the 45 minutes required by humans.
Maybe she wants to be a surgeon, but that job may not be safe, either. Robots already assist surgeons in removing damaged organs and cancerous tissue, according to Scientific American. Last year, a prototype robotic surgeon called STAR (Smart Tissue Autonomous Robot) outperformed human surgeons in a test in which both had to repair the severed intestine of a live pig.
So perhaps your daughter detours to law school to become a rainmaking corporate lawyer. Skies are cloudy in that profession, too. Any legal job that involves lots of mundane document review (and that’s a lot of what lawyers do) is vulnerable.
Software programs are already being used by companies including JPMorgan Chase & Company to scan legal papers and predict what documents are relevant, saving lots of billable hours. Kira Systems, for example, has reportedly cut the time that some lawyers need to review contracts by 20 to 60 percent.
As a matter of professional survival, I would like to assure my children that journalism is immune, but that is clearly a delusion. The Associated Press already has used a software program from a company called Automated Insights to churn out passable copy covering Wall Street earnings and some college sports, and last year awarded the bots the minor league baseball beat.
What about other glamour jobs, like airline pilot? Well, last spring, a robotic co-pilot developed by the Defense Advanced Research Projects Agency, known as Darpa, flew and landed a simulated 737. I hardly count that as surprising, given that pilots of commercial Boeing 777s, according to one 2015 survey, only spend seven minutes during an average flight actually flying the thing. As we move into the era of driverless cars, can pilotless planes be far behind?
Then there is Wall Street, where robots are already doing their best to shove Gordon Gekko out of his corner office. Big banks are using software programs that can suggest bets, construct hedges and act as robo-economists, using natural language processing to parse central bank commentary to predict monetary policy, according to Bloomberg. BlackRock, the biggest fund company in the world, made waves earlier this year when it announced it was replacing some highly paid human stock pickers with computer algorithms.
So am I paranoid? Or not paranoid enough? A much-quoted 2013 study by the University of Oxford Department of Engineering Science — surely the most sober of institutions — estimated that 47 percent of current jobs, including insurance underwriter, sports referee and loan officer, are at risk of falling victim to automation, perhaps within a decade or two.
Just this week, the McKinsey Global Institute released a report that found that a third of American workers may have to switch jobs in the next dozen or so years because of A.I.
I know I am not the only parent wondering if I can robot-proof my children’s careers. I figured I would start by asking my own what they want to do when they grow up.
Toby, a people pleaser and born entertainer, is obsessed with cars and movies. He told me he wanted to be either an Uber driver or an actor. (He is too young to understand that those jobs are usually one and the same.)
As for Uber drivers, it is no secret that they are headed to that great parking garage in the sky; the company recently announced plans to buy 24,000 Volvo sport utility vehicles to roll out as a driverless fleet between 2019 and 2021.
And actors? It may seem unthinkable that some future computer-generated thespian could achieve the nuance of expression and emotional depth of, say, Dwayne Johnson. But Hollywood is already Silicon Valley South. Consider how filmmakers used computer graphics to reanimate Carrie Fisher’s Princess Leia and Peter Cushing’s Grand Moff Tarkin as they appeared in the 1970s (never mind that Mr. Cushing died in 1994) for “Rogue One: A Star Wars Story.”
My younger son, Anton, a sweetheart but tough as Kevlar, said he wanted to be a football player. Robot football may sound crazy, but come to think of it, a Monday night battle between the Dallas Cowdroids and Seattle Seabots may be the only solution to the sport’s endless concussion problems.
He also said he wanted to be a soldier. If he means foot soldier, however, he might want to hold off on enlistment. Russia recently unveiled Fedor, a humanoid robot soldier that looks like RoboCop after a Whole30 crash diet; this space-combat-ready android can fire handguns, drive vehicles, administer first aid and, one hopes, salute. Indeed, the world’s armies are in such an arms race developing grunt-bots that one British intelligence expert predicted that American forces will have more robot soldiers than humans by 2025.
And again, all of this stuff is happening now, not 25 years from now. Who knows what the jobs marketplace might look like by then. We might not even be the smartest beings on the planet.
Ever heard of the “singularity”? That is the term that futurists use to describe a potentially cataclysmic point at which machine intelligence catches up to human intelligence, and likely blows right past it. They may rule us. They may kill us. No wonder Mr. Musk says that A.I. “is potentially more dangerous than nukes.”
But is it really that dire? Fears of technology are as old as the Luddites, those machine-smashing British textile workers of the early 19th century. Usually, the fears turn out to be overblown.
The rise of the automobile, to cite the obvious example, did indeed put most manure shovelers out of work. But it created millions of jobs to replace them, not just for Detroit assembly line workers, but for suburban homebuilders, Big Mac flippers and actors performing “Greased Lightnin’” in touring revivals of “Grease.” That is the process of creative destruction in a nutshell.
But artificial intelligence is different, said Martin Ford, the author of “Rise of the Robots: Technology and the Threat of a Jobless Future.” Machine learning does not just give us new machines to replace old machines, pushing human workers from one industry to another. Rather, it gives us new machines to replace us, machines that can follow us to virtually any new industry we flee to.
Since Mr. Ford’s book sent me down this rabbit hole in the first place, I reached out to him to see if he was concerned about all this for his own children: Tristan, 22, Colin, 17, and Elaine, 10.
He said the most vulnerable jobs in the robot economy are those involving predictable, repetitive tasks, however much training they require. “A lot of knowledge-based jobs are really routine — sitting in front of a computer and cranking out the same application over and over, whether it is a report or some kind of quantitative analysis,” he said.
Professions that rely on creative thinking enjoy some protection (Mr. Ford’s older son is a graduate student studying biomedical engineering). So do jobs emphasizing empathy and interpersonal communication (his younger son wants to be a psychologist).
Even so, the ability to think creatively may not provide ultimate salvation. Mr. Ford said he was alarmed in May when Google’s AlphaGo software defeated a 19-year-old Chinese master at Go, considered the world’s most complicated board game.
“If you talk to the best Go players, even they can’t explain what they’re doing,” Mr. Ford said. “They’ll describe it as a ‘feeling.’ It’s moving into the realm of intuition. And yet a computer was able to prove that it can beat anyone in the world.”
In one talk, Albert Wenger, an influential tech investor, promoted the Basic Income Guarantee concept. Also known as Universal Basic Income, this sunny concept holds that a robot-driven economy may someday produce an unlimited bounty of cool stuff while simultaneously releasing us from the drudgery of old-fashioned labor, leaving our government-funded children to enjoy bountiful lives of leisure as interpretive dancers or practitioners of bee-sting therapy, as touted by Gwyneth Paltrow.
The idea is all the rage among Silicon Valley elites, who not only understand technology’s power, but who also love to believe that it will be used for good. In their vision of a post-A.I. world without traditional jobs, everyone will receive a minimum weekly or monthly stipend (welfare for all, basically).
Another talk by David Autor, an economist, argued that reports of the death of work are greatly exaggerated. Almost 50 years after the introduction of the A.T.M., for instance, more humans actually work as bank tellers than ever. The computers simply freed the humans from mind-numbing work like counting out 20-dollar bills to focus on more cognitively demanding tasks like “forging relationships with customers, solving problems and introducing them to new products like credit cards, loans and investments,” he said.
Computers, after all, are really good at some things and, for the moment, terrible at others. Even Anton intuits this. The other day I asked him if he thought robots were smarter or dumber than humans. “Sdumber,” he said after a long pause. Confused, I pushed him. “Smarter and dumber,” he explained with a cheeky smile.
He was joking. But he also happened to be right, according to Andrew McAfee, a management theorist at the Massachusetts Institute of Technology whom I interviewed a short while later.
Discussing another of Anton’s career aspirations — songwriter — Dr. McAfee said that computers were already smart enough to come up with a better melody than a lot of humans. “The things our ears find pleasant, we know the rules for that stuff,” he said. “However, I’m going to be really surprised when there is a digital lyricist out there, somebody who can put words to that music that will actually resonate with people and make them think something about the human condition.”
Not everyone, of course, is cut out to be a cyborg-Springsteen. I asked Dr. McAfee what other jobs may exist a decade from now.
“I think health coaches are going to be a big industry of the future,” he said. “Restaurants that have a very good hospitality staff are not about to go away, even though we have more options to order via tablet.
“People who are interested in working with their hands, they’re going to be fine,” he said. “The robot plumber is a long, long way away.”
In the era of artificial intelligence, robots and more, higher education is arguably more important than ever. Academic researchers are producing the ideas that lead to technology after technology. On the other hand, a challenge exists for higher education: how to produce graduates whose careers won’t be derailed by all of these advances. Now that robots can pick stocks, this isn’t just about factory jobs, but the positions that college graduates have long assumed were theirs.
Northeastern University is involved in both sides of that equation. Its academic programs in engineering, computer science and other fields are producing these breakthroughs. And its students — at an institution known for close ties to employers — of course want good careers. Joseph E. Aoun, Northeastern’s president, explores these issues in Robot-Proof: Higher Education in the Age of Artificial Intelligence (MIT Press). Aoun is a scholar in linguistics when he’s not focused on university administration. His book argues that changes in the college curriculum are needed to prepare students in this new era, but that doesn’t mean ignoring the humanities or general education.
Q: How worried should college graduates be about being replaced by technology? Is it likely that many jobs today held by those with college degrees will be replaced by robots or some form of technology?
A: Smart machines are getting smarter, and many of the jobs performed by people today are going to disappear. Some studies predict that half of all U.S. jobs are at risk within the next 20 years. And it’s not just blue-collar jobs; today intelligent machines are picking stocks, doing legal research and even writing news articles. Simply put, if a job can be automated in the future, it will be.
For higher education to meet this challenge — for us to make people robot-proof — we need to change. In my book, I offer a blueprint for how we can accomplish this. We will need to re-envision the curriculum, invest in experiential education and put lifelong learning at the heart of what we do. It will not be easy, but we have a responsibility — to the students of today and tomorrow — to change the way we do business.
Q: In an era of adaptive learning and online learning, should faculty members be worried about their jobs in the future?
A: We’re seeing educational content become commoditized. Therefore, the job of faculty members has to go beyond simply transmitting knowledge. More than ever, the priority for faculty is to create new knowledge and act as the catalysts to make their students robot-proof. The personal connection between student and teacher cannot be replaced by a machine.
But, like students, faculty members must act to meet the challenge of today’s world and should embrace the transformation of higher education that I describe in my book.
Q: What is “humanics,” and what are the three kinds of literacy that you want colleges to teach?
A: Humanics is the curriculum for a robot-proof education. It is based on the purposeful integration of technical literacies, such as coding and data analytics, with uniquely human literacies, such as creativity, entrepreneurship, ethics, cultural agility and the ability to work with others.
The key is integration. We need to break down the academic silos that separate historians from engineers.
When I talk to employers, they tell me that they would give their right arm for more systems thinkers — quarterbacks who can see across disciplines and analyze them in an integrated way. And every student should be culturally agile, able to communicate across boundaries, and to think ethically. By integrating technology, data and humanities, we can help students become robot-proof.
Q: In your vision for the future of higher education, is this about embedding these skills into existing programs or starting from scratch?
A: Higher education has the elements for a robot-proof model, but we need to be much more intentional about how we integrate them. As I’ve mentioned, our curriculum needs to change so that technical and human literacies are unified.
We need to deliver this curriculum in an experiential way. This means recognizing that learning happens beyond the classroom through co-ops and meaningful internships. I truly believe that experiential education is the most powerful way to learn.
Still, no one is going to be set for life. We need to commit to lifelong learning in a way that we haven’t done in the past. Universities have been engaged in lifelong learning for many years, but it is usually treated as a second-class operation. We need to bring lifelong learning to the core of our mission.
This will require us to rethink the way we deliver education, particularly to working professionals who don’t have time to be on campus every day. Online and hybrid delivery modes will be essential. We have to meet learners wherever they are — in their careers and around the world.
Credentials will need to be unbundled so that learners don’t have to commit to long-term degree programs. Stackable certificates, badges and boot camps will become the norm.
These changes won’t happen by themselves. Institutions should establish authentic partnerships with employers, redesign courses to teach the skills employers actually need, and connect employers with students through co-ops and internships.
Q: How is Northeastern getting ready for these changes?
A: Northeastern has designed its academic plan to meet the challenges — and opportunities — presented by smart machines. Beyond the curricular changes required by humanics, and our leadership in experiential learning, we are building a multicampus network spanning different cities, regions and countries. Learners will be able to gain access to this network wherever they are and whenever it’s convenient for them.
Throughout its history, higher education has adapted to changes in the world. Knowing what we know about the revolution of smart machines, we have a responsibility to remain relevant and an opportunity to make our learners robot-proof.
“Everything that is old is new again!” Daniel Rabuzzi exclaims, his eyes lighting up with excitement that seems to match the glowing, handcrafted flower pinned on his vest. He’s talking about the next wave of the Maker Movement, the big news buzzing among makers in the inner circle.
Rabuzzi is the executive director of Mouse, a national nonprofit that encourages students to create with technology. The organization, now celebrating 20 years in operation, is part of the worldwide Maker Movement, encouraging students to get creative (and messy) when using technology to build things. Rabuzzi calls his work at Mouse “shop and home economics for the 21st century,” and his students “digital blacksmiths.”
Rabuzzi, like many experts within the Maker Movement, believes the heavy emphasis on standardized testing in schools, which has pushed the arts, shop and home economics into the shadows, is what spurred outside groups like Mouse to begin hosting alternative makerspaces for students. Throughout the years, Rabuzzi has seen the movement evolve. Most recently, he’s seen technology become more directly integrated with making, along with an uptick of women in leadership.
“It can’t just be the boys tinkering in the basement anymore,” says Rabuzzi, pointing to women in maker leadership, like littleBits founder Ayah Bdeir, who encouraged more young girls to enter the space.
Now Rabuzzi, along with makers, investors, and journalists, is buzzing about what they describe as the next wave of making: the Maker economy, which many believe will transform manufacturing in the United States by integrating with the Internet of Things (IoT), augmented reality (AR), virtual reality (VR) and artificial intelligence (AI).
“There is all this talk about bringing back manufacturing to America, and I feel like this is going to come back on a local level,” says Juan Garzon, a former Mouse student who started his own hardware company. He believes that personalized goods designed and manufactured by Makers through mediums like 3D printing will drive the return of domestic manufacturing.
“The future of manufacturing is not a big plant, but someone designing what they want and developing custom made things. It sounds so sci-fi, but it is within my lifetime,” continues Garzon.
News reports from Chicago Inno show that custom manufacturing designed by makers might be an active part of the domestic economy sooner than Garzon realizes. Inno reports that several Maker-entrepreneur spaces are popping up in the city, hoping to develop places where creators can build scalable products to be manufactured and launch new businesses.
For many, talk of 3D printing and merging Making with AI are bleeding-edge topics, far away from today’s realities. But for technologists supporting Mouse, this is the world they want to prepare students to be a part of.
Mouse students at the 20th-anniversary party are already getting started. At the event, some students proudly showed off projects they designed in 3D spaces that can be viewed and altered in virtual reality. Many of the projects students worked on required a mixture of creativity, technical skills and awareness of societal needs. Displays showcasing green energy projects along with digital wearable technology for persons with disabilities were all throughout the room. Still, Rabuzzi imagines more.
He hopes that through making, students can test the limits of new technologies and do good for society. “How do we use Alexa and Siri in the Maker Movement?” Rabuzzi wonders aloud. He describes his idea of using AI to support students in designing, prototyping and creating new learning pathways in the future, but admits that he doesn’t have the funding or technology for such ambitious projects now. He hopes that some of Mouse’s corporate funding partners will be interested in supporting such endeavors.
“We are preparing today’s young people for a cyber future,” he explains. “In the old days if you had a clever idea you had to go into a big company to get it done. Now you can make it yourself.”
Mountain View, Calif. — THE humanities are kaput. Sorry, liberal arts cap-and-gowners. You blew it. In a software-run world, what’s wanted are more engineers.
At least, so goes the argument in a rising number of states, which have embraced a funding model for higher education that uses tuition “bonuses” to favor hard-skilled degrees like computer science over the humanities. The trend is backed by countless think pieces. “Macbeth does not make my priority list,” wrote Vinod Khosla, a co-founder of Sun Microsystems and the author of a widely shared blog post titled “Is Majoring in Liberal Arts a Mistake for Students?” (Subtitle: “Critical Thinking and the Scientific Process First — Humanities Later”).
The technologist’s argument begins with a suspicion that the liberal arts are of dubious academic rigor, suited mostly to dreamers. From there it proceeds to a reminder: Software powers the world, ergo, the only rational education is one built on STEM. Finally, lest he be accused of making a pyre of the canon, the technologist grants that yes, after students have finished their engineering degrees and found jobs, they should pick up a book — history, poetry, whatever.
As a liberal-arts major who went on to a career in software, I can only scratch my head.
Fresh out of college in 1993, I signed on with a large technology consultancy. The firm’s idea was that by hiring a certain lunatic fringe of humanities majors, it might cut down on engineering groupthink. After a six-week programming boot camp, we were pitched headfirst into the deep end of software development.
My first project could hardly have been worse. We (mostly engineers, with a spritzing of humanities majors) were attached to an enormous cellular carrier. Our assignment was to rewrite its rating and billing system — a thing that rivaled maritime law in its complexity.
I was assigned to a team charged with one of the hairier programs in the system, which concerned the movement of individual mobile subscribers from one “parent” account plan to another. Each one of these moves caused an avalanche of plan activations and terminations, carry-overs or forfeitures of accumulated talk minutes, and umpteen other causal conditionals that would affect the subscriber’s bill.
This program, thousands of lines of code long and growing by the hour, was passed around our team like an exquisite corpse. The subscribers and their parent accounts were rendered on our screens as a series of S’s and A’s. After we stared at these figures for weeks, they began to infect our dreams. (One I still remember. I was a baby in a vast crib. Just overhead, turning slowly and radiating malice, was an enormous iron mobile whose arms strained under the weight of certain capital letters.)
Our first big break came from a music major. A pianist, I think, who joined our team several months into the project. Within a matter of weeks, she had hit upon a method to make the S’s hold on to the correct attributes even when their parent A was changed.
We had been paralyzed. The minute we tweaked one bit of logic, we realized we’d fouled up another. But our music major moved freely. Instead of freezing up over the logical permutations behind each A and S, she found that these symbols put her in the mind of musical notes. As notes, they could be made to work in concert. They could be orchestrated.
On a subsequent project, our problem was pointers. In programming language, a pointer is an object that refers to some master value stored elsewhere. This might sound straightforward, but pointers are like ghosts in the system. A single misdirected one can crash a program. Our pointer wizard was a philosophy major who had no trouble at all with the idea of a named “thing” being a transient stand-in for some other unseen Thing. For a Plato man, this was mother’s milk.
I’ve worked in software for years and, time and again, I’ve seen someone apply the arts to solve a problem of systems. The reason for this is simple. As a practice, software development is far more creative than algorithmic.
The developer stands before her source code editor in the same way the author confronts the blank page. There’s an idea for what is to be created, and the (daunting) knowledge that there are a billion possible ways to go about it. To proceed, each relies on one part training to three parts creative intuition. They may also share a healthy impatience for the ways things “have always been done” and a generative desire to break conventions. When the module is finished or the pages complete, their quality is judged against many of the same standards: elegance, concision, cohesion; the discovery of symmetries where none were seen to exist. Yes, even beauty.
To be sure, each craft also requires a command of the language and its rules of syntax. But these are only starting points. To say that more good developers will be produced by swapping the arts for engineering is like saying that to produce great writers, we should double down on sentence diagramming.
Here the technologists may cry foul, say I’m misrepresenting the argument, that they’re not calling to avoid the humanities altogether, but only to replace them in undergraduate study. “Let college be for science and engineering, with the humanities later.” In tech speak, this is an argument for the humanities as plug-in.
But if anything can be treated as a plug-in, it’s learning how to code. It took me 18 months to become proficient as a developer. This isn’t to pretend software development is easy — those were long months, and I never touched the heights of my truly gifted peers. But in my experience, programming lends itself to concentrated self-study in a way that, say, “To the Lighthouse” or “Notes Toward a Supreme Fiction” do not. To learn how to write code, you need a few good books. To enter the mind of an artist, you need a human guide.
For folks like Mr. Khosla, such an approach is dangerous: “If subjects like history and literature are focused on too early, it is easy for someone not to learn to think for themselves and not to question assumptions, conclusions, and expert philosophies.” (Where some of these kill-the-humanities pieces are concerned, the strongest case for the liberal arts is made just in trying to read them.)
How much better is the view of another Silicon Valley figure, who argued that “technology alone is not enough — it’s technology married with liberal arts, married with the humanities, that yields us the result that makes our heart sing.”
When educator Lynn Koresh hears from kids that they want a career doing something with computers, she asks, “To do what with computers?”
Adults often encourage kids to pursue science, technology, engineering and math (STEM) skills, and computing classes are usually a first stop. But Koresh knows it’s the real-world applications of computational thinking and coding language skills that bring such knowledge to life.
She reasoned that most middle school students are already playing video games and might respond well to a unit on how to design, create, test and promote video games. Along the way, she’s also teaching them about digital citizenship and entrepreneurship.
“I wanted to give kids exposure to what it means to have a career using computers,” said Koresh, technology coordinator at Edgewood Campus School in Madison, Wisconsin.
She gave students the task of designing a game using Gamestar Mechanic, a Web tool that helps kids create games. Before any programming begins, students talk about their games, set objectives and start storyboarding on paper. They think about the game’s avatars and how the game mechanics will work. Koresh shared her experience teaching this class at the Games Learning Society conference in Madison.
As students develop their games, they test them on one another throughout the semester. Koresh has found kids often give short, positive feedback, which makes it hard to learn enough to improve a game. She says the kids respond this way mostly because they’re protective of their friends and worry that honest criticism will cost a friend a good grade, even though grades aren’t at stake.
“You have to get specific enough so they don’t say, ‘It’s good, I liked it.’ You have to force them to take a stand.”
To help improve the process, she has reframed the questions around student game critiques in a consumer-oriented way, such as, “Would you pay 99 cents for this app? Would you give it three stars or four stars?”
To help them become more critical thinkers, the students read product reviews on blogs and business sites to learn about features that might improve the user experience. In the process, Koresh hopes the kids learn to be selective digital consumers and do research before making purchases or trusting a source.
It’s also an opportunity to talk about a person’s digital footprint and the types of comments, images and videos that can come back to haunt someone.
“If you put it online, it should be worthy of other students, grandma, everyone seeing it,” said Koresh.
Once the games are completed, the middle school students have three seconds to pitch their game to fourth-grade players in the form of a slide on a computer screen. Since time to persuade the audience is limited, much like in real life, game designers have to “sell” their game with one compelling slide. Students have to be selective about which elements of the game to highlight. Creating the slide is also an opportunity to talk about marketing.
“It’s great you’ve made something, but how do you get other people to use it?” Koresh asks her students. They get a good idea about how well their ad has worked based on the number of plays their games receive.
As for whether parents object to kids spending more time on video games, she says they have been supportive of STEM activities and pre-coding skills learned in game design. Koresh has found the time students spent on the games, both inside and outside class, has helped them think about coding as an extracurricular activity. Girls who have created games in her class have gone on to enter STEM design competitions.
Jewelbots hopes to bring the old-school friendship bracelet into the iPhone age and teach girls to code with its smart jewelry.
The team behind the Kickstarter project — which has already raised double the $30,000 goal — has built an open-source wearable for teen and tween girls to encourage them to learn coding through basic logic.
The bracelets have four LEDs, a vibration motor and Bluetooth connectivity. They connect with each other to form a mesh network, which means a phone isn’t required to communicate with friends.
Out of the box, a Jewelbot can detect nearby friends and send secret messages, but with simple logic and a few taps it can be extended to do a lot more.
Extending the bracelet is straightforward, using a smartphone and an “if this then that” style workflow. It can be programmed, for example, to light up when a specific friend is nearby.
The bracelet can also be plugged into a computer via USB and programmed directly, using the Arduino integrated development environment (IDE), to create further extensions.
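The “if this then that” workflow described above amounts to a small rule table: a trigger (a particular friend comes into range) paired with an action (light the LEDs a certain color). A minimal sketch of that idea is below; the names `Rule` and `pick_color` are illustrative assumptions, not Jewelbots’ actual firmware or API.

```cpp
#include <string>
#include <vector>

// One "if this then that" rule: the trigger is a friend's ID,
// the action is an LED color. (Hypothetical structure, for illustration.)
struct Rule {
    std::string friend_id;  // trigger: which friend was detected nearby
    std::string led_color;  // action: which color to light up
};

// Return the LED color for the first rule whose friend is in range,
// or "off" when no rule matches.
std::string pick_color(const std::vector<Rule>& rules,
                       const std::vector<std::string>& nearby) {
    for (const auto& rule : rules) {
        for (const auto& id : nearby) {
            if (id == rule.friend_id) {
                return rule.led_color;
            }
        }
    }
    return "off";
}
```

On the real hardware, the trigger list would come from the Bluetooth mesh of nearby bracelets and the action would drive the four LEDs, presumably inside an Arduino-style loop.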
The developers designed the project by working with groups of teen girls, who gave feedback on aesthetics and functionality.
The team has created two phases of prototypes already and plans a final round before testing and manufacturing begins later this year.
Jewelbots is the brainchild of CEO Sara Chipps and COO Brooke Moreland, who set out to “inspire a deep curiosity and lasting love for computers and programming” using the devices.
The pair say they hope to get girls to “[open] their minds to science, technology, engineering and mathematics [STEM] at an age when many lose interest.”
I love the idea of Jewelbots. It’s a tangible way to pique girls’ interest in coding and offers a path to getting them hooked. I know from first-hand experience that there’s nothing quite like coding something that can be touched and used in the real world.
The company also hosted ‘Bring Your Daughter To Hack’ events in New York and San Francisco last month, where kids were able to build their own wearables.
A single Jewelbot starts at $59 with a pack of two costing $89. They won’t ship until March 2016 and reward tiers are limited, so you’ll have to get in fast if you’re interested.