Full Circle Part 3/12: Artificial Intelligence
You knew it was coming.
In just a few years, Artificial Intelligence (AI) has become a teacher, coder, customer service representative, data analyst, author, and so much more - with new capabilities constantly emerging.
Every day, tech giants seem to pour billions into AI development, while many educators remain cautious about its role in academia. Where should you stand? When should you use these tools? How can you use them effectively and ethically?
In Edition #3, we're exploring artificial intelligence - not just the history behind the industry and the tools themselves, but how to think about them as part of your academic toolkit.
Let’s get one thing clear - AI is by no means a new concept:
700 BCE: Greek mythology describes Talos, a giant bronze man and arguably the first concept of a robot.
1909: E. M. Forster published the short story “The Machine Stops,” a dystopian tale in which humans live beneath the Earth’s surface and worship and depend on machines and technology.
1950: Alan Turing proposed the Turing Test, asking whether machines are capable of thinking.
1956: A small group of scientists at Dartmouth College officially coined the term “Artificial Intelligence.”
What even is AI? Today, there are three types of AI. The first is Artificial Narrow Intelligence (ANI), which excels at specific, limited tasks but can't apply those skills elsewhere. Think of it like a computer that's amazing at finding the best driving routes, but that's all it can do. A classic example is IBM's Deep Blue, which beat chess grandmaster Garry Kasparov in 1997.
Enter Generative Artificial Intelligence (GenAI), which powers tools like ChatGPT and Claude. These systems, called Large Language Models (LLMs), can handle many tasks, from writing code to writing essays. While more advanced than ANI, GenAI still has clear limits.
GenAI is built on something called the transformer architecture and works in two main phases. The first is training, where the model reads massive amounts of data to learn language patterns and how different concepts connect to one another. During training, the model develops parameters - think of these as billions of neural connections that help it process information. The second phase is generation - the phase you're familiar with. You type in a prompt, and the model uses what it learned during training to create a response.
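If you're curious what "learn patterns during training, then reuse them during generation" actually looks like, here's a deliberately tiny sketch in Python. To be clear, this is our own toy illustration, not how transformers actually work - real models learn billions of parameters rather than a word-pair table - but it shows the two phases side by side:

# Toy illustration of the two phases described above. Real LLMs use
# transformer neural networks with billions of parameters; this just
# counts which word tends to follow which, to show the idea of
# "learn patterns during training, reuse them during generation."
import random
from collections import defaultdict

def train(corpus):
    """Training phase: read text and record which word follows which."""
    patterns = defaultdict(list)
    words = corpus.split()
    for current_word, next_word in zip(words, words[1:]):
        patterns[current_word].append(next_word)
    return patterns  # a stand-in for a model's learned parameters

def generate(patterns, prompt, length=10):
    """Generation phase: extend a prompt using the learned patterns."""
    output = prompt.split()
    for _ in range(length):
        choices = patterns.get(output[-1])
        if not choices:
            break
        output.append(random.choice(choices))
    return " ".join(output)

corpus = "the model reads text the model learns patterns the model writes text"
patterns = train(corpus)
print(generate(patterns, "the model"))

The split is the same in the real thing: an expensive training phase the company runs once, and a much cheaper generation phase that happens every time you type a prompt.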
Artificial General Intelligence (AGI), the theoretical next step, would be AI that matches or surpasses human intelligence in every way. Unlike GenAI today, AGI would easily apply knowledge across different fields, learn from experience, truly understand context, and even improve itself.
From a regulatory perspective, AI is the Wild Wild West. One big issue is how AI companies gather the massive amounts of data needed to train their systems. The New York Times and other media companies are suing OpenAI for using their copyrighted content without permission or payment. But the data problems don't stop there - depending on the provider's policies, what you type into an AI system may be used to train future versions of it. Adding to the complexity, there's ongoing debate about who owns AI-generated content - the person who prompted it, or the AI company that created the model.
AI experts are also concerned about the environmental cost of these systems. As AI models grow more complex, they need enormous computing power. For perspective, GPT-3 had ~175 billion parameters, while GPT-4 is rumored to have over 1.5 trillion (OpenAI has never published an official figure). Training and running these systems takes massive amounts of energy, lots of water for cooling, and rare minerals for hardware - resources that are already scarce and often obtained through environmentally harmful methods.
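To put "enormous computing power" in perspective, here's a rough back-of-envelope calculation (our own illustrative numbers, not official specs) of the memory needed just to store 175 billion parameters:

params = 175e9                # ~175 billion parameters (GPT-3's publicly reported size)
bytes_per_param = 2           # 16-bit floating point weights = 2 bytes each

weight_storage_gb = params * bytes_per_param / 1e9
print(f"~{weight_storage_gb:.0f} GB just to hold the model's weights")  # ~350 GB

# A single high-end GPU has on the order of tens of gigabytes of memory,
# so even loading a model this size means spreading it across many chips,
# and that's before counting the electricity and cooling needed to run it.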
So, what are the potential positive impacts of AI? Doctors can use it to spot diseases earlier and find the right treatments for each patient. Farmers can use it to grow more food with less water and fewer chemicals. The applications span nearly every industry, and society has yet to understand AI's true impact.
Let’s be real, companies like OpenAI and Anthropic aren’t building these LLMs so you can cheat on your homework. How, then, can you use these fantastic tools to your advantage as a student?
Our approach this week is to first give you a short list of AI tools and how they differ from one another, and then a list of ways to use them to benefit your learning.
AI Tools We Recommend:
Claude - Developed by Anthropic and publicly released in March 2023, Claude represents a new generation of AI assistants focused on thoughtful, nuanced interactions. It is currently ChatGPT's main competitor and handles a very similar range of tasks.
Perplexity - Launched in late 2022, Perplexity reimagined how we search for information by combining AI-powered conversations with real-time web searches. Think of it as having a research librarian who not only answers your questions but shows you exactly where the information comes from.
Native AI Assistants - Beginning with Apple's Siri in 2011, followed by Google Assistant and Microsoft's Copilot (built into Outlook and other everyday apps), these AI tools are deeply integrated into the software we use daily. What makes native AI assistants special is their seamless integration with existing workflows and their ability to work with your personal data while aiming to keep it private.
ChatGPT - Surely you know this one already, but we figured it would be a disservice not to include it.
How to Use Them:
As a study buddy - Use AI as the friend who's always ready to quiz you, discuss topics, or bounce ideas around. It's like having a study partner who never gets tired or judges you, and is available whenever you need to work through concepts.
To create practice tests for you - When you're preparing for an exam, AI can generate questions that target exactly what you're studying, adapting to your level of understanding.
To explain concepts you don't understand - Sometimes we need concepts explained in different ways before they click. AI can break down complex ideas using analogies, simpler terms, or real-world examples until you find the explanation that makes sense to you.
To create visuals to help organize complex topics - AI can help you structure information into diagrams, flowcharts, or mind maps that make complicated subjects easier to grasp and remember.
AI should never be used to do the actual work or to generate what are supposed to be your original ideas. If you already know how to do something, go for it. If you want to use it to help generate an outline for a paper you're going to write, go for it. But don't get yourself into trouble, or even worse, trick yourself into uselessness by relying on it every time you should be using your brain.
Those are the tools - now here’s how we’re thinking about them:
Jon: Perhaps the best way to think about artificial intelligence, and more importantly the way it will affect the world, is as a realignment of which skills are truly valued. Do we go down the path of allowing AI to diminish the connections we form with others? Or do we instead allow it to reprioritize those connections?
Sam: This past summer I interned for a Fortune 150 company, and very excitedly shared in a meeting how I leveraged our new, internal AI to assist me in completing a task. While I recognize AI is flawed, I 100% see it as a valuable resource and tool - people I interact with tend to share this sentiment. The following day my amazing manager helped me understand that while we both saw AI as an invaluable tool, many in the corporate world view it as a threat to their job security. This was my first encounter with AI anxiety in a professional setting, and it transformed how I approach the topic.
Listen, we know this has been dense. At the end of the day, if you have one takeaway, we hope it’s that these resources are incredibly powerful tools, but there is nothing more powerful than your brain. Overreliance on artificial resources is another surefire way to guarantee the obsolescence discussed in Edition #1.
If you’re interested in reading more, you may find the links below useful. We look forward to seeing you next week!
Stargate Project [article]
The Machine Stops [short story]
Chip War [book]