As artificial general intelligence (AGI) rapidly advances, the conversation is shifting from philosophical debate to one of practical relevance, with immense opportunity to transform global businesses and human potential.
Turing’s AGI Icons event series brings together AI innovators to discuss the practical and responsible advancement of AGI solutions. On July 24, Turing hosted our second AGI Icons event at SHACK15, San Francisco’s exclusive hub for entrepreneurs and tech innovators. In a conversation moderated by Anita Ramaswamy, financial columnist at The Information, I sat down with Quora CEO Adam D’Angelo to discuss the road to AGI and share insights into development timelines, real-world applications, and principles for responsible deployment.
The Road from AI to AGI
The “north star” that drives AI research is the pursuit of human-level “intelligence.” What separates AGI from standard AI is its progression past narrow functionality toward greater generality (breadth) and performance (depth), even exceeding human capabilities.
This is “the road to AGI,” where AI progresses to more autonomous systems, superior reasoning, enhanced capabilities, and improved functionality. These progressions are broken down into five taxonomic levels:
- Level 0: No AI – Simple tools like calculators
- Level 1: Emerging AGI – Current LLMs like ChatGPT
- Level 2: Competent AGI – AI systems that match skilled adults on specific tasks
- Level 3: Expert AGI – AI systems at the 90th percentile of skilled adults
- Level 4: Virtuoso AGI – AI systems at the 99th percentile
- Level 5: Superhuman AGI – AI systems that outperform all humans
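To make the taxonomy above concrete, here is a minimal sketch that encodes the levels as a simple lookup. It is illustrative only: the percentile cutoffs and the `classify` helper are assumptions for demonstration, not official definitions from any framework.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AGILevel:
    level: int          # taxonomy level (0-5)
    name: str           # label from the list above
    percentile: float   # minimum percentile of skilled adults matched (illustrative)

# Illustrative encoding of the five taxonomy levels described above.
AGI_LEVELS = [
    AGILevel(0, "No AI", 0.0),
    AGILevel(1, "Emerging AGI", 0.0),
    AGILevel(2, "Competent AGI", 50.0),   # assumed cutoff for "matches skilled adults"
    AGILevel(3, "Expert AGI", 90.0),
    AGILevel(4, "Virtuoso AGI", 99.0),
    AGILevel(5, "Superhuman AGI", 100.0),
]

def classify(percentile: float) -> AGILevel:
    """Map a measured skill percentile to the highest taxonomy level it satisfies."""
    eligible = [lvl for lvl in AGI_LEVELS if percentile >= lvl.percentile]
    return eligible[-1]

if __name__ == "__main__":
    print(classify(92.0).name)  # -> "Expert AGI"
```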
During our discussion, Adam defined AGI as “software that can do everything a human can do.” He envisions a future where AI improves itself, eventually taking over the complex human tasks currently handled by machine learning researchers.
Taking this a step further, I described AGI as an “artificial brain” capable of diverse tasks like “machine translation, complex queries, and coding.” That breadth is what distinguishes AGI from the predictive AI and narrow forms of ML that came before it, and it feels like emergent behavior.
Realistic Development Timelines on the Road to AGI
Just like on a road trip, the top-of-mind question about AGI is, “Are we there yet?” The short answer is no, but as AI research accelerates, the right question to ask is, “How can we balance AGI ambition with realistic expectations?”
Adam highlighted that increased automation from AGI will shift human roles rather than eliminate them, leading to faster economic growth and more efficient productivity. “As this technology gets more powerful, we’ll get to a point where 90% of what people are doing today is automated, but everyone will have shifted into other things.”
Currently, much of the world economy is constrained by the number of people available to work. Once we achieve AGI, we can grow the economy at a much faster rate than is possible today.
We can’t give a definitive timeline for when true AGI will be realized, but Adam and I cited several instances of AI advancements paving the way for future progress toward AGI. For instance, Turing’s experiments with AI developer tools showed a 33% increase in developer productivity, hinting at even greater potential.
Real-World Applications and Effects
One of the most promising applications of AGI lies in the field of software development. Large language models (LLMs), a precursor to AGI, are already being used to enhance software development and improve code quality. I see this era of AI as closer to biology than physics, where all types of knowledge work will improve. There’s going to be so much more productivity unlocked from and for humanity.
My perspective comes from experience, where I’ve witnessed a 10-fold personal productivity increase when using LLMs and AI developer tools. We’re also using AI at Turing to evaluate technical talent and match the right software engineers and PhD-level domain experts to the right jobs.
What I’m seeing in the LLM training space, for example, is that trainers leverage these models to enhance developer productivity and accelerate project timelines. By automating routine coding tasks and providing intelligent code suggestions, LLMs free up developers to focus on more strategic and creative aspects of their work.
Adam closed out with, “LLMs won’t write all the code, but understanding software fundamentals remains crucial. Calculators didn’t eliminate the need to learn arithmetic.” He added, “Developers become more valuable when using these models. The presence of LLMs is a positive for developer jobs, and there’s going to be a lot of gains for developers.”
We’re entering a golden era of software development where one software engineer can be 10x more productive, create more, and benefit the world.
Technical and Governance Challenges
Despite AGI’s promising potential, significant technical and governance challenges must be addressed. Robust evaluation processes and regulatory frameworks are necessary to balance AGI innovation with public safety.
Adam emphasized the need for thorough testing and sandboxing to limit worst-case scenarios. “You want to have some kind of robust evaluation process… and get that distribution that you’re testing against to be as close to the real world usage as possible.”
And I agree. The bottleneck for AGI progress is now human intelligence, rather than computing power or data. Human expertise is crucial for fine-tuning and customizing AI models, which is why Turing focuses on sourcing and matching top-tier tech professionals to balance models with human intelligence.
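As a rough illustration of the evaluation approach Adam describes, the sketch below weights a test set so its task mix mirrors observed real-world usage before scoring model outputs. Every function and field name here is a hypothetical assumption for illustration, not an existing benchmark or Turing tooling.

```python
import random
from collections import Counter

def sample_eval_set(usage_log, benchmark_pool, n=1000, seed=0):
    """Draw benchmark tasks in proportion to how often each task type
    appears in real-world usage logs, so the test distribution matches usage."""
    rng = random.Random(seed)
    usage_mix = Counter(record["task_type"] for record in usage_log)
    total = sum(usage_mix.values())
    eval_set = []
    for task_type, count in usage_mix.items():
        quota = round(n * count / total)
        candidates = [t for t in benchmark_pool if t["task_type"] == task_type]
        if candidates:
            eval_set.extend(rng.choices(candidates, k=quota))
    return eval_set

def evaluate(model_fn, eval_set, judge_fn):
    """Run the model on each sampled task and average the judge's scores."""
    scores = [judge_fn(task, model_fn(task["prompt"])) for task in eval_set]
    return sum(scores) / max(len(scores), 1)
```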
We must address these challenges head-on by focusing on capabilities over processes, balancing generality with performance, and measuring AGI by its potential.
Perspectives on Challenges: Improving Human-AGI Interactions
Some of the best practices to address AGI challenges include:
- Focus on capabilities or “what AGI can do” rather than processes or “how it does it”.
- Balance generality and performance as essential components of AGI.
- Focus on cognitive/metacognitive tasks and learning abilities over physical tasks/outputs.
- Measure AGI by its potential and capabilities.
- Focus on ecological validity by aligning benchmarks with real-world tasks people value.
- Remember the path to AGI isn’t a single endpoint, it’s an iterative process.
Adding to these best practices, Adam and I stressed the importance of improving human-AGI interactions. Adam emphasized the value of learning how and when to use these models, viewing them as powerful learning tools that can quickly teach any subdomain of programming, while still underscoring the importance of understanding the fundamentals.
Similarly, I suggest that making every human a power user of LLMs could significantly enhance productivity and understanding across fields by making complex information accessible to all. But it requires a phased, iterative approach: starting with AI copilots assisting humans, then moving to agents with human supervision, and eventually reaching fully autonomous agents on well-evaluated tasks.
With that, post-training differentiation is critical, involving supervised fine-tuning (SFT) and leveraging human intelligence to build custom models. Companies that can source and match trainers, engineers, and other experts will speed up their fine-tuning and custom engineering capabilities. Collaboration with leading companies like OpenAI and Anthropic is also key to applying these models across diverse industries.
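For readers curious what SFT looks like in practice, here is a minimal, hedged sketch of fine-tuning a causal language model on expert-written demonstrations using the open-source Hugging Face transformers library. The checkpoint, data, and hyperparameters are illustrative assumptions, not Turing’s or Quora’s actual pipeline.

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in checkpoint; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Expert-written (prompt, response) demonstrations -- the "human intelligence"
# that differentiates a custom model. Contents here are placeholders.
examples = [
    {"prompt": "Explain binary search.", "response": "Repeatedly halve the sorted range..."},
]

def collate(batch):
    texts = [ex["prompt"] + "\n" + ex["response"] + tokenizer.eos_token for ex in batch]
    enc = tokenizer(texts, return_tensors="pt", padding=True, truncation=True, max_length=512)
    labels = enc["input_ids"].clone()
    labels[enc["attention_mask"] == 0] = -100  # ignore padding in the loss
    enc["labels"] = labels
    return enc

loader = DataLoader(examples, batch_size=2, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for batch in loader:
    loss = model(**batch).loss  # standard next-token prediction objective
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

In practice, the differentiation comes less from this training loop than from the quality and coverage of the human-written demonstrations fed into it.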
Principles of Responsible AGI Development
“AGI development must be responsible and ethical, ensuring safety and transparency while fostering innovation.” – Adam D’Angelo
Responsible development of AGI requires adhering to several core principles:
- Safety and Security: Ensuring AGI systems are reliable and resistant to misuse, especially as models scale to accommodate new data inputs or algorithms.
- Transparency: Being realistic about AGI’s capabilities, limitations, and “how it works”.
- Ethical Considerations: Tackling fairness, bias, and how AGI will impact employment and other socioeconomic factors.
- Regulation: Working with governments and other organizations to develop frameworks balancing progress with public safety.
- Benchmarking: Future benchmarks must quantify AGI behavior and capabilities against ethical considerations and taxonomy levels.
Conclusion: Focus on the path to AGI, not a single endpoint
The road to AGI is complex, but each stop along the way is important to the journey. By understanding AGI’s iterative improvements, along with their implications, people and businesses will be able to responsibly adopt this evolving technology. This is the crux of responsible AGI development, where real-world interactivity informs how we navigate this new frontier.