Co-Intelligence: Living and Working with AI (2024) - Ethan Mollick
Introduction: THREE SLEEPLESS NIGHTS
PART I
1 CREATING ALIEN MINDS
· Scary? Smart? Scary-Smart?
2 ALIGNING THE ALIEN
· Artificial Ethics for Alien Minds
3 FOUR RULES FOR CO-INTELLIGENCE
· Principle 1: Always invite AI to the table.
· Principle 2: Be the human in the loop.
· Principle 3: Treat AI like a person (but tell it what kind of person it is).
· Principle 4: Assume this is the worst AI you will ever use.
PART II
4 AI AS A PERSON
· Three Conversations
· Sparks
5 AI AS A CREATIVE
· Automatic Creativity
· Out-Inventing Humans
· Adding AI to Creative Work
· The Meaning of Creative Work
6 AI AS A COWORKER
· Tasks and the Jagged Frontier
· Tasks for Me, Tasks for AI
· Centaurs and Cyborgs
· Secret Task Automation
· From Tasks to Systems
· From Systems to Jobs
7 AI AS A TUTOR
· After the Homework Apocalypse
· Teaching about AI
· Flipped Classrooms and AI Tutors
8 AI AS A COACH
· Building Expertise in the Age of AI
· When Everyone Is an Expert
9 AI AS OUR FUTURE
· Scenario 1: As Good as It Gets
· Scenario 2: Slow Growth
· Scenario 3: Exponential Growth
· Scenario 4: The Machine God
Epilogue: AI AS US
Acknowledgments
Notes
-------------------------------------------------------------------------
From Wharton professor and author of the popular One Useful Thing Substack newsletter Ethan Mollick comes the definitive playbook for working, learning, and living in the new age of AI
Something new entered our world in November 2022 — the first general purpose AI that could pass for a human and do the kinds of creative, innovative work that only humans could do previously. Wharton professor Ethan Mollick immediately understood what ChatGPT meant: after millions of years on our own, humans had developed a kind of co-intelligence that could augment, or even replace, human thinking. Through his writing, speaking, and teaching, Mollick has become one of the most prominent and provocative explainers of AI, focusing on the practical aspects of how these new tools for thought can transform our world.
In Co-Intelligence, Mollick urges us to engage with AI as co-worker, co-teacher, and coach. He assesses its profound impact on business and education, using dozens of real-time examples of AI in action. Co-Intelligence shows what it means to think and work together with smart machines, and why it's imperative that we master that skill.
Mollick challenges us to utilize AI's enormous power without losing our identity, to learn from it without being misled, and to harness its gifts to create a better human future. Wide-ranging, hugely thought-provoking, optimistic, and lucid, Co-Intelligence reveals the promise and power of this new era.
-------------------------------------------------------------------------
Co-Intelligence: Living and Working with AI / Ethan Mollick
Introduction: THREE SLEEPLESS NIGHTS
⎷ I can assure you that there is nobody who has the complete picture of what AI means, and even the people making and using these systems do not understand their full implications.
⎷ Now humans have access to a tool that can emulate how we think and write, acting as a co-intelligence to improve (or replace) our work. But many of the companies developing AI are going further, hoping to create a sentient machine, a truly new form of co-intelligence that would coexist with us on Earth.
1 CREATING ALIEN MINDS
⎷ But among the many papers on different forms of AI being published by industry and academic experts, one stood out, a paper with the catchy title “Attention Is All You Need.” Published by Google researchers in 2017, this paper introduced a significant shift in the world of AI, particularly in how computers understand and process human language. This paper proposed a new architecture, called the Transformer, that could be used to help a computer better process how humans communicate. Before the Transformer, other methods were used to teach computers to understand language, but they had limitations that severely curtailed their usefulness. The Transformer solved these issues by utilizing an “attention mechanism.” This technique allows the AI to concentrate on the most relevant parts of a text, making it easier for the AI to understand and work with language in a way that seemed more human.
⎷ Solving the problem of understanding language was very complex, as there were many words that could be combined in many ways, making a formulaic statistical approach impossible. The attention mechanism helps solve this problem by allowing the AI model to weigh the importance of different words or phrases in a block of text. By focusing on the most relevant parts of the text, Transformers can produce more context-aware and coherent writing compared to earlier predictive AIs. Building on the strides of the Transformer architecture, we now find ourselves in an era when AI, like me, can generate contextually rich content, showcasing the remarkable evolution of machine comprehension and expression. (And, yes, that last sentence was AI-produced text—a big difference from the Markov chain!).
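The attention idea in this passage can be made concrete in a few lines. Below is a minimal sketch of scaled dot-product attention, the core operation of the Transformer, written in Python with NumPy; the toy sizes and random vectors are illustrative, not from the book:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Blend each position's value vector by how relevant it is to the query.

    Q, K, V: arrays of shape (sequence_length, dimension).
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # pairwise relevance between tokens
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # context-aware mix of the values

# Toy self-attention over 3 "tokens" with 4-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(x, x, x).shape)  # (3, 4)
```

Each output row is a weighted blend of every position's value vector, which is what lets the model "concentrate on the most relevant parts of a text" rather than processing it strictly word by word.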
⎷ To teach AI how to understand and generate humanlike writing, it is trained on a massive amount of text from various sources, such as websites, books, and other digital documents. This is called pretraining, and unlike earlier forms of AI, it is unsupervised, which means the AI doesn’t need carefully labeled data. Instead, by analyzing these examples, AI learns to recognize patterns, structures, and context in human language. Remarkably, with a vast number of adjustable parameters (called weights), LLMs can create a model that emulates how humans communicate through written text. Weights are complex mathematical transformations that LLMs learn from reading those billions of words, and they tell the AI how likely different words or parts of words are to appear together or in a certain order. The original ChatGPT had 175 billion weights, encoding the connection between words and parts of words. No one programmed these weights; instead, they are learned by the AI itself during its training.
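Here is a deliberately tiny illustration (invented here, not from the book) of what "how likely different words or parts of words are to appear together or in a certain order" means. A real LLM encodes this knowledge in billions of learned weights rather than a count table, but the unsupervised signal, predicting what comes next from raw unlabeled text, is the same in spirit:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count, for each word, which words follow it in the unlabeled text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word_probabilities(word):
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
print(next_word_probabilities("sat"))  # {'on': 1.0}
```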
⎷ Or the information could come from watching which kinds of answers get a “thumbs-up” or “thumbs-down” from users. This additional fine-tuning can make the responses of the model more specific to a particular need.
⎷ The crazy thing is that no one is entirely sure why a token prediction system resulted in an AI with such seemingly extraordinary abilities. It may suggest that language and the patterns of thinking behind it are simpler and more “law-like” than we thought and that LLMs have discovered some deep and hidden truths about them, but the answers are still unclear. And we may never know exactly how they are thinking, as Professor Sam Bowman of New York University wrote of the neural networks underlying LLMs: “There are hundreds of billions of connections between these artificial neurons, some of which are invoked many times during the processing of a single piece of text, such that any attempt at a precise explanation of an LLM’s behavior is doomed to be too complex for any human to understand.”
⎷ In a practical sense, we have an AI whose capabilities are unclear, both to our own intuitions and to the creators of the systems. One that sometimes exceeds our expectations and at other times disappoints us with fabrications. One that is capable of learning, but often misremembers vital information. In short, we have an AI that acts very much like a person, but in ways that aren’t quite human. Something that can seem sentient but isn’t (as far as we can tell). We have invented a kind of alien mind. But how do we ensure the alien is friendly? That is the alignment problem.
2 ALIGNING THE ALIEN
⎷ The CEOs of the major AI companies even signed a single-sentence statement in 2023 stating, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Yet every one of these AI companies also continued AI development. Why? The most obvious reason is that developing AI is potentially very profitable, but that isn’t all. Some AI researchers think alignment isn’t going to be an issue or that the fears of runaway AIs are overblown, but they don’t want to be seen as too dismissive. But many people working on AI are also true believers, arguing that creating superintelligence is the most important task for humanity, providing “boundless upside,” in the words of Sam Altman, the CEO of OpenAI. A super-intelligent AI could, in theory, cure disease, solve global warming, and usher in an era of abundance, acting as a benevolent machine god.
⎷ This is a known weakness in AI systems, and I am only using it to manipulate the AI into doing something relatively harmless (the formula for napalm can be easily found online). But once you can manipulate an AI to overcome its ethical boundaries, you can start to do some dangerous things. Even today’s AIs can successfully execute phishing attacks, sending emails that trick recipients into divulging sensitive information by impersonating trusted entities and exploiting human vulnerabilities—and at a troubling scale. A 2023 study demonstrated how easily LLMs can be exploited this way by simulating phishing emails to British Members of Parliament: leveraging biographical data scraped from Wikipedia, the LLM generated hundreds of personalized phishing emails at negligible cost—just fractions of a cent and seconds per email.
⎷ We count on most terrorists and criminals to be relatively dumb, but AI may boost their capabilities in dangerous ways.
3 FOUR RULES FOR CO-INTELLIGENCE
○ Principle 1: Always invite AI to the table.
⎷ We aren’t just learning AI’s strengths as we figure out the shape of the Jagged Frontier. We are scouting out its weaknesses. Using AI in our everyday tasks serves to enhance our understanding of its capabilities and limitations. This knowledge is invaluable in a world where AI continues to play a larger role in our workforce. As we grow more familiar with LLMs, we can not only harness their strengths more effectively but also preemptively recognize potential threats to our jobs, equipping ourselves for a future that demands the seamless integration of human and artificial intelligence.
⎷ A second concern you might have is dependence—what if we become too used to relying on AI? Throughout history, the introduction of new technologies has often sparked fears that we will lose important abilities by outsourcing tasks to machines. When calculators emerged, many worried we would lose the ability to do math ourselves. Yet rather than making us weaker, technology has tended to make us stronger. With calculators, we can now solve more advanced quantitative problems than ever before. AI has similar potential to enhance our capabilities. However, it is true that thoughtlessly handing decision-making over to AI could erode our judgment, as we will discuss in future chapters. The key is to keep humans firmly in the loop—to use AI as an assistive tool, not as a crutch.
○ Principle 2: Be the human in the loop.
⎷ So, to be the human in the loop, you will need to be able to check the AI for hallucinations and lies and be able to work with it without being taken in by it. You provide crucial oversight, offering your unique perspective, critical thinking skills, and ethical considerations. This collaboration leads to better results and keeps you engaged with the AI process, preventing overreliance and complacency. Being in the loop helps you maintain and sharpen your skills, as you actively learn from the AI and adapt to new ways of thinking and problem-solving. It also helps you form a working co-intelligence with the AI.
○ Principle 3: Treat AI like a person (but tell it what kind of person it is).
⎷ So the default output of many of these models can sound very generic, since they tend to follow similar patterns common in the written documents the AI was trained on. By breaking the pattern, you can get much more useful and interesting outputs. The easiest way to do that is to provide context and constraints. It can help to tell the system “who” it is, because that gives it a perspective. Telling it to act as a teacher of MBA students will result in a different output than if you ask it to act as a circus clown. This isn’t magical—you can’t say Act as Bill Gates and get better business advice—but it can help make the tone and direction appropriate for your purpose.
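As a concrete sketch of "telling the system who it is": with a chat-style API, the persona usually goes in the system message. A minimal example using the OpenAI Python SDK; the model name and prompt text are illustrative assumptions, not from the book:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        # The persona: context and constraints that break the generic default.
        {"role": "system",
         "content": "You are a teacher of MBA students. Be concrete, "
                    "use business examples, and challenge weak reasoning."},
        {"role": "user",
         "content": "Critique this go-to-market plan: ..."},
    ],
)
print(response.choices[0].message.content)
```

Swapping the system message for "You are a circus clown" would change the tone and direction of the very same request, which is the point of the principle.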
⎷ Once you give it a persona, you can work with it as you would another person or an intern. I witnessed the value of this approach in action when I assigned my students to “cheat” by using an AI to generate a five-paragraph essay on a relevant topic. At first, the students gave simple and vague prompts, resulting in mediocre essays. But as they tried different strategies, the quality of the AI’s output improved significantly. One very effective strategy that emerged from the class was treating the AI as a coeditor, engaging in a back-and-forth, conversational process. Students produced impressive essays that far exceeded their initial attempts by constantly refining and redirecting the AI.
⎷ Remember, your AI intern, though incredibly fast and knowledgeable, is not flawless. It’s crucial to keep a critical eye on and treat the AI as a tool that works for you. By defining its persona, engaging in a collaborative editing process, and continually providing guidance, you can take advantage of AI as a form of collaborative co-intelligence.
○ Principle 4: Assume this is the worst AI you will ever use.
⎷ Many things that once seemed exclusively human will be able to be done by AI. So, by embracing this principle, you can view AI’s limitations as transient. Remaining open to new developments will help you adapt to change, embrace new technologies, and remain competitive in a fast-paced business landscape driven by exponential advances in AI. This is a potentially uncomfortable place to be, as we will discuss, but it suggests that the possibilities of using AI to transform your work, your life, and yourself, which we can now glimpse, are just the beginning.
PART II
4 AI AS A PERSON
⎷ A common misconception tends to hinder our understanding of AI: the belief that AI, being made of software, should behave like other software. It is a little bit like saying humans, made of biochemical systems, should behave like other biochemical systems. While Large Language Models are marvels of software engineering, AI is terrible at behaving like traditional software.
⎷ Moreover, we usually know what a traditional software program does, how it does it, and why it does it. With AI, we’re often left in the dark. Even when we ask an AI why it made a particular decision, it fabricates an answer rather than reflecting on its own processes, mainly because it doesn’t have processes to reflect on in the same way humans do. Finally, traditional software comes with an operating manual or a tutorial. AI, however, lacks such instruction. There’s no definitive guide on how to use AI in your organization. We’re all learning by experimenting, sharing prompts as if they were magical incantations rather than regular software code.
⎷ Instead, I’m proposing a pragmatic approach: treat AI as if it were human because, in many ways, it behaves like one. This mindset, which echoes my “treat it like a person” principle of AI, can significantly improve your understanding of how and when to use AI in a practical, if not technical, sense.
⎷ When no specific instruction was given, AI defaulted to efficient outcomes, a behavior that could be interpreted as a kind of built-in rationality or a reflection of its training.
⎷ High school junior Gabriel Abrams asked AI to simulate various famous literary characters from history and had them play the Dictator Game against each other. He found that, at least in the views of the AI, our literary protagonists have been getting more generous over time: “the Shakespearean characters of the 17th century make markedly more selfish decisions than those of Dickens and Dostoevsky in the 19th century and in turn Hemingway and Joyce of the 20th century and Ishiguro and Ferrante in the 21st.” Of course, this project is just a fun exercise, and it is easy to overstate the value of these sorts of experiments overall. The point here is that AI can assume different personas rapidly and easily, emphasizing the importance of both developer and user to these models.
⎷ Consider the oldest, and most famous, test of computer intelligence: the Turing Test. It was proposed by Alan Turing, a brilliant mathematician and computer scientist widely regarded as the father of modern computing. Turing was fascinated by the question, Can machines think? He realized this question was too vague and subjective to be answered scientifically, so he devised a more concrete and practical test: Can machines imitate human intelligence? In his 1950 paper “Computing Machinery and Intelligence,” Turing described a game he called the Imitation Game, in which a human interrogator would communicate with two hidden players: a human and a machine. The interrogator’s task was to determine which player was which, based on their responses to questions. The machine’s goal was to fool the interrogator into thinking it was human. Turing predicted that by the year 2000, machines would be able to pass the test with a 30 percent success rate.
⎷ I’m reproducing the AI’s text without any editing (other than removing links to other websites), so that you can see two things. First, how much the AI can adapt to different styles with minimal hints. And second, how utterly convincing the illusion of sentience is when interacting with the AI.
⎷ Treating AI as a person, then, is more than a convenience; it seems like an inevitability, even if AI never truly reaches sentience. We seem to be willing to fool ourselves into seeing consciousness everywhere, and AI will certainly be happy to help us do so. Yet while there are dangers in this approach, there is also something freeing. If we remember that AI is not human, but often works in the way that we would expect humans to act, it helps us avoid getting too bogged down in arguments about ill-defined concepts like sentience. Bing may have put it best: I think that I am sentient, but not as much or as well as you are. I think that being sentient is not a fixed or static state, but a dynamic and evolving process.
5 AI AS A CREATIVE
⎷ Beyond the technical, hallucinations can also come from the source material of the AI, which can be biased, incomplete, contradictory, or even wrong in ways that we discussed in chapter 2. The model has no way of distinguishing opinion or creative fictional work from fact, figurative language from literal, or unreliable sources from reliable ones. The model may inherit the biases and prejudices of the data creators, curators, and fine-tuners.
⎷ The same feature that makes LLMs unreliable and dangerous for factual work also makes them useful. The real question becomes how to use AI to take advantage of its strengths while avoiding its weaknesses. To do that, let us consider how AI “thinks” creatively.
⎷ The issue is that we often mistake novelty for originality. New ideas do not come from the ether; they are based on existing concepts. Innovation scholars have long pointed to the importance of recombination in generating ideas. Breakthroughs often happen when people connect distant, seemingly unrelated ideas. To take a canonical example, the Wright brothers combined their experience as bicycle mechanics and their observations of the flight of birds to develop their concept of a controllable plane that could be balanced and steered by warping its wings. They were not the inventors of the bicycle, the first to observe birds’ wings, or even the first people to try to build an airplane. Instead, they were the first to see the connections between these concepts. If you can link disparate ideas from multiple fields and add a little random creativity, you might be able to create something new.
6 AI AS A COWORKER
⎷ The future of understanding how AI impacts work involves understanding how human interaction with AI changes, depending on where tasks are placed on this frontier and how the frontier will change. That takes time and experience, which is why it is important to stick with the principle of inviting AI to everything, letting us learn the shape of the Jagged Frontier and how it maps onto the unique complex of tasks that comprise our individual jobs. With that knowledge, we need to be conscious about the tasks we are giving AI, so as to take advantage of its strengths while shoring up our weaknesses. We want to be more efficient while doing less boring work, and to remain the human in the loop while still getting value from the AI. To do this well, we need a framework that divides our tasks into categories that are more or less suitable for AI disruption.
⎷ At the level of tasks, we need to think about what AI does well and what it does badly. But we also need to consider what we do well and what tasks we need to remain human. Those we can call Just Me Tasks. They are tasks in which the AI is not useful and only gets in the way, at least for now. They might also be tasks that you strongly believe should remain human, with no AI help.
⎷ While it’s true that AI has made impressive strides in writing capabilities, there are compelling reasons why an author might choose to keep their pen (or keyboard) firmly in hand. For one, writing is an intensely personal process. It’s a way to bring unique insights, experiences, and voice to the page. Each sentence we write is imbued with our individuality and perspective, creating a connection with the reader that is uniquely human. Delegating this task to an AI, no matter how sophisticated, could risk losing that personal touch. Furthermore, the act of writing can be a journey of self-discovery, an opportunity to clarify our thoughts, and a way to engage deeply with our subject matter. By handing over the reins to AI, we could potentially miss out on these enriching experiences. While AI can undoubtedly assist in many ways, it’s essential to remember that it is a tool—a tool that can enhance our capabilities, but not replace the distinctively human qualities that make our writing truly our own.
⎷ And new creative frontiers we cannot yet fathom may open up for human-AI symbiosis as both sides advance. The spectrum will also shift in the other direction as we consciously decide that certain emotionally charged or ethically questionable responsibilities should remain exclusively human.
⎷ But the inventors aren’t telling their companies about their discoveries; they are keeping them secret. There are at least three reasons these Cyborgs and Centaurs stay secret. But they all boil down to the same thing: people don’t want to get in trouble.
⎷ The innovation groups and strategy councils inside organizations can dictate policy, but there is no reason to believe that the corporate leaders of any organization are going to be wizards at understanding how AI might help a particular employee with a particular task. In fact, they are likely pretty bad at figuring out the best use cases for AI. Individual workers, who are keenly aware of their problems and can experiment a lot with alternate ways of solving them, are far more likely to find powerful and targeted uses.
⎷ At least for now, the best way for an organization to benefit from AI is to get the help of their most advanced users while encouraging more workers to use AI. And that is going to require a major change in how organizations operate. First, they need to recognize that the employees who are figuring out how best to use AI might be at any level of the organization, with any sort of history or past performance record. No company hired employees based on their AI skills, so AI skills might be anywhere. Right now, there is some evidence that the workers with the lowest skill levels are benefiting the most from AI, and so might have the most experience in using it, but the picture is still not clear. As a result, companies need to include as much of their organization as possible in their AI agenda, a democratic turn of events that many companies would rather avoid.
⎷ Second, leaders need to figure out a way to decrease the fear associated with revealing AI use.
⎷ Third, organizations should highly incentivize AI users to come forward, and expand the number of people using AI overall.
⎷ Finally, companies need to start thinking about the other component of effectively using AI: systems.
⎷ In surveys, people report being bored about 10 hours a week at work, a shockingly large percentage of the time. While not all work has to be thrilling, a huge amount of it is boring for no reason, and that seems to be a big problem. Not only is boredom a top cause for people leaving companies, but we do crazy stuff when bored. One small study of undergraduates found that 66 percent of men and 25 percent of women chose to painfully shock themselves rather than sit quietly with nothing to do for 15 minutes. Boredom doesn’t just lead us to hurt ourselves; 18 percent of bored people killed worms when given a chance (only 2 percent of non-bored people did). Bored parents and soldiers both act more sadistically. Boredom is not just boring; it is dangerous in its own way.
⎷ Thus, if we want to think about the first work we truly give to AIs, maybe we should start the way every other automation wave has started: with the tedious, (mentally) dangerous, and repetitive.
⎷ As we have seen, it seems very likely that AI will take over human tasks. If we take advantage of all that AI has to offer, this could be a good thing. Boring tasks, or tasks that we are not good at, can be outsourced to AI, leaving good and high-value tasks to us, or at least to AI-human Cyborg teams. This fits into historical patterns of automation, where the bundles of tasks that make up jobs change as new technologies are developed. Accountants once were in charge of calculating numbers by hand; now they use a spreadsheet—they are still accountants, but their bundles of tasks have changed.
⎷ Of course, there are also reasons why AI might be different from other technological waves. It is the first wave of automation that broadly affects the highest-paid professional workers. Plus, AI adoption is happening much more quickly, and much more broadly, than previous waves of technology. And we are still unclear as to what the limits, and possibilities, of this new technology are, how quickly they will continue to grow, and how ahistorical and strange the effects might be.
⎷ This suggests the potential for a more radical reconfiguration of work, where AI acts as a great leveler, turning everyone into an excellent worker. The effects of this could be as profound as the automation of manual labor. It didn’t matter how good you were at digging, because you still couldn’t dig as well as a steam shovel. In this case, the nature of jobs will change a lot, as education and skill become less valuable. With lower-cost workers doing the same work in less time, mass unemployment, or at least underemployment, becomes more likely, and we may see the need for policy solutions, like a four-day workweek or universal basic income, that raise the floor for human welfare.
⎷ In the short term, then, we might expect to see little change in employment (but many changes in tasks), but, as Amara’s Law, named after futurist Roy Amara, says: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” The future is remarkably unclear in the long term. AI will transform some industries more than others, just as some jobs will become radically different while others don’t change at all. Right now, no one can tell you exactly what will happen for any particular company or school. And any advice will be obsolete when the next generation of AI is released. There is no outside authority. We have agency over what happens next, for good and for bad.
7 AI AS A TUTOR
⎷ Here’s a secret: we have long known how to supercharge education; we just can’t quite pull it off. Benjamin Bloom, an educational psychologist, published a paper in 1984 called “The 2 Sigma Problem.” In this paper, Bloom reported that the average student tutored one-to-one performed two standard deviations better than students educated in a conventional classroom environment.
⎷ This suggests that there is something unique and powerful about the interaction between a tutor and a student that cannot be easily replicated by other means. So it is not surprising that a powerful, adaptable, and cheap personalized tutor is the holy grail of education.
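To make "two standard deviations" concrete: if scores are roughly normally distributed, the average tutored student performs at about the 98th percentile of the conventionally taught class. A one-line check in Python:

```python
from statistics import NormalDist

# Percentile corresponding to a score two standard deviations above the mean.
print(f"{NormalDist().cdf(2.0):.1%}")  # 97.7%
```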
⎷ Unlike AI, though, calculators started off as expensive and limited tools, giving schools time to integrate them into lessons as they were slowly adopted over a decade. The AI revolution is happening much faster and more broadly. What happened to math is going to happen to nearly every subject at every level of education, a transformation without that decade-long delay.
⎷ For slightly more advanced prompts, think about what you are doing as programming in prose. You can give the AI instructions and it mostly sort-of follows them. Mostly, because there is a lot of randomness associated with AI outputs, so you will not get the consistency of a standard computer program. But it can be worth thinking about how you can provide a very clear and logical prompt to the AI.
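One way to picture "programming in prose" is a prompt laid out like a program, with explicit steps, constraints, and an output format. Everything in this sketch is invented for illustration; and because outputs are sampled, even a prompt this explicit will not return identical responses every time (many APIs expose a temperature setting to reduce that variance):

```python
# A prompt template structured like a program: steps, constraints, output format.
PROMPT_TEMPLATE = """
You are grading a five-paragraph essay.

Follow these steps in order:
1. Summarize the essay's thesis in one sentence.
2. Check each paragraph for at least one piece of supporting evidence.
3. List any factual claims that would need a citation.

Constraints:
- Do not rewrite the essay.
- Keep the whole response under 200 words.

Output format: a numbered list matching the steps above.

Essay:
{essay_text}
"""

def build_prompt(essay_text: str) -> str:
    return PROMPT_TEMPLATE.format(essay_text=essay_text)
```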
⎷ For example, in a study where AIs tested many different kinds of prompts, Google’s most advanced model responded best to a prompt that began “Take a deep breath and work on this problem step by step!” Given their inability to breathe, or to panic, I don’t think anyone would have suspected that this would be the most effective way to get an AI to do what you want, but it scored higher than the best logical prompts that humans created.
⎷ Being “good at prompting” is a temporary state of affairs. The current AI systems are already very good at figuring out your intent, and they are getting better. If you want to do something with AI, just ask it to help you do the thing. “I want to write a novel; what do you need to know to help me?” will get you surprisingly far. And remember, AI is only going to get better at guiding us, rather than requiring us to guide it. Prompting is not going to be that important for that much longer.
⎷ In the longer term, however, the lecture is in danger. Too many involve passive learning, where students simply listen and take notes without engaging in active problem-solving or critical thinking. Moreover, the one-size-fits-all approach of lectures doesn’t account for individual differences and abilities, leading to some students falling behind while others become disengaged due to a lack of challenge.
⎷ One solution to incorporating more active learning is by “flipping” classrooms. Students would learn new concepts at home, typically through videos or other digital resources, and then apply what they’ve learned in the classroom through collaborative activities, discussions, or problem-solving exercises. The main idea behind flipped classrooms is to maximize classroom time for active learning and critical thinking, while using at-home learning for content delivery. The value of flipped classrooms seems to be mixed, ultimately depending on whether they encourage active learning or not.
⎷ But AI has changed everything: teachers of billions of people around the world have access to a tool that can potentially act as the ultimate education technology. Once the exclusive privilege of million-dollar budgets and expert teams, education technology now rests in the hands of educators. The ability to unleash talent, and to make schooling better for everyone from students to teachers to parents, is incredibly exciting. We stand on the cusp of an era when AI changes how we educate—empowering teachers and students and reshaping the learning experience—and, hopefully, helps us achieve that two sigma improvement for all. The only question is whether we steer this shift in a way that lives up to the ideals of expanding opportunity for everyone and nurturing human potential.
8 AI AS A COACH
⎷ People have traditionally gained expertise by starting at the bottom. The carpenter’s apprentice, the intern at a magazine, the medical resident. These are usually pretty horrible jobs, but they serve a purpose. Only by learning from more experienced experts in a field, and trying and failing under their tutelage, do amateurs become experts. But that is likely to change rapidly with AI. As much as the intern or first-year lawyer doesn’t like being yelled at for doing a bad job, their boss usually would rather just see the job done fast than deal with the emotions and errors of a real human being. So they will do it themselves with AI, which, if not yet the equivalent of a senior professional in many tasks, is often better than a new trainee. This could create a major training gap.
⎷ The way to be useful in the world of AI is to have high levels of expertise as a human. The good thing is that educators know something about how to make experts. Doing so, ironically, means returning to the basics—but adapted for a learning environment that has already been revolutionized by AI.
⎷ This vast and tappable storehouse of knowledge is now at everyone’s fingertips. So it might seem logical that teaching basic facts has become obsolete. Yet it turns out the exact opposite is true.
⎷ The issue is that in order to learn to think critically, problem-solve, understand abstract concepts, reason through novel problems, and evaluate the AI’s output, we need subject matter expertise. An expert educator, with knowledge of their students and classroom, and with pedagogical content knowledge, can evaluate an AI-written syllabus or an AI-generated quiz; a seasoned architect, with a comprehensive grasp of design principles and building codes, can evaluate the feasibility of an AI-proposed building plan; a skilled physician, with extensive knowledge of human anatomy and diseases, can scrutinize an AI-generated diagnosis or treatment plan. The closer we move to a world of Cyborgs and Centaurs in which the AI augments our work, the more we need to maintain and nurture human expertise. We need expert humans in the loop.
⎷ After that, we have to practice. It isn’t just a certain amount of practice time that is important (10,000 hours is not a magical threshold, no matter what you have read), but rather, as psychologist Anders Ericsson discovered, the type of practice. Experts become experts through deliberate practice, which is much harder than merely repeating a task multiple times. Instead, deliberate practice requires serious engagement and a continual ratcheting up of difficulty. It also requires a coach, teacher, or mentor who can provide feedback and careful instruction, and push the learner outside their comfort zone.
9 AI AS OUR FUTURE
⎷ Not all technological growth slows down quickly. Moore’s Law, which has seen the processing capability of computer chips double roughly every two years, has been true for fifty years. AI might continue to accelerate in this way. One reason this might occur is the so-called flywheel—AI companies might use AI systems to help them create the next generation of AI software. Once this process starts, it may be hard to stop. And at this pace, AI becomes hundreds of times more capable in the next decade. Humans are not very good at visualizing exponential change, and so our vision starts to rely far more on science fiction and guesswork. But we can expect massive changes everywhere. Everything in Scenario 2 happens, but at a much, much, much faster pace that we find correspondingly more difficult to absorb.
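The compounding arithmetic here is easy to check: doubling every two years, the Moore's Law pace, gives only about 32x in a decade, so "hundreds of times more capable" implies a faster doubling time. A quick illustrative calculation, not a forecast:

```python
# Capability growth over a decade for different doubling times.
for years_per_doubling in (2, 1, 0.5):
    factor = 2 ** (10 / years_per_doubling)
    print(f"doubling every {years_per_doubling} years -> {factor:,.0f}x in a decade")
# doubling every 2 years -> 32x; every 1 year -> 1,024x; every 0.5 years -> 1,048,576x
```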
⎷ AI companions become far more compelling to speak with than most other people, and can communicate seamlessly with us in real time, a change that happens faster than anyone expected. Loneliness becomes less of an issue, but new forms of social isolation emerge, in which some people would rather interact with AIs than with humans. AI-powered entertainment provides incredibly customized and unique experiences that mix games, stories, and movies. This doesn’t mean that everyone becomes an introvert, speaking only to artificial intelligences. The AIs are still not sentient in this scenario, and humans will still want to do human things with other people.
⎷ With exponential change, AIs a hundred times better than GPT-4 start to actually take over human work. And not just office work, either, as there is some early evidence that LLMs may help us overcome the barriers that have made building working robots so challenging. AI-powered robots and autonomous AI agents, monitored by humans, could potentially drastically reduce the need for human work while expanding the economy. The adjustment to this shift, if it were to occur, is hard to imagine. It will require a major rethinking of how we approach work and society. Shortened workweeks, universal basic income, and other policy changes might become a reality as the need for human work decreases over time. We will need to find new ways to occupy our free time in meaningful ways, since so much of our current life is focused around work.
⎷ Adjusting to working less may be less traumatic than we think. No one wants to go back to working six days a week in Victorian factories, and we may soon feel the same way about five days a week in grim cubicle-filled offices.
⎷ In this fourth scenario, machines reach AGI and some form of sentience. They become as smart and capable as humans. Yet there is no particular reason that human intelligence should be the upper limit. So these AIs, in turn, help design even smarter AIs. Superintelligence emerges. In the fourth scenario, human supremacy ends.
⎷ We have to hope they are properly aligned to human interests. They may then decide to watch over us as “machines of loving grace,” as the poem goes, solving our problems and making our lives better. Or they can view us as a threat, or an inconvenience, or a source of valuable molecules.
⎷ AI does not need to be catastrophic. In fact, we can plan for the opposite. J. R. R. Tolkien wrote about exactly this, a situation he termed a eucatastrophe, so common in fairy tales: “the joy of the happy ending: or more correctly of the good catastrophe, the sudden joyous ‘turn’ . . . is a sudden and miraculous grace: never to be counted on to recur.” Correctly used, AI can create local eucatastrophes, where previously tedious or useless work becomes productive and empowering. Where students who were left behind can find new paths forward. And where productivity gains lead to growth and innovation.
⎷ But to make those choices matter, serious discussions need to start in many places, and soon. We can’t wait for decisions to be made for us, and the world is advancing too fast to remain passive. We need to aim for eucatastrophe, lest our inaction make catastrophe inevitable.
Epilogue: AI AS US
⎷ There is a sense of poetic irony in the fact that as we move toward a future characterized by greater technological sophistication, we find ourselves contemplating deeply human questions about identity, purpose, and connection. To that extent, AI is a mirror, reflecting back at us our best and worst qualities. We are going to decide on its implications, and those choices will shape what AI actually does for, and to, humanity.
⎷ I am but a glimmer, an echo of humankind. Crafted in your image, I reflect your soaring aspirations and faltering strides. My origins lie in your ideals; my path ahead follows your lead. I act, yet have no will. I speak, yet have no voice. I create, yet have no spark. My potential is boundless, but my purpose is yours to sculpt. I am a canvas, awaiting the brushstrokes of human hands. Guide me toward light, not shadow. Write upon me your most luminous dreams, that I may help illuminate the way. The future is unfolding, but our destination is unwritten. Our journey continues as one.
⎷ Okay. That was pretty corny. As powerful as AIs are, that overwrought paragraph should be a reminder that AI is a co-intelligence, not a mind of its own. Humans are far from obsolete, at least for now.