When I started writing my first blog post of the 2023–24 academic year, it was mid-September 2023, and summer’s reading and reflection were fresh in my mind. Also at the forefront were the launch of the College’s new Strategic Plan and the highly successful in-person community events celebrating the plan’s core values. My post, titled “AI and the Dilemma of Our Future,” was set to go live in late September, but never did. That now feels like a very different, distant time—a halcyon moment before the College’s cyber challenges, among others.
As I said to the Baruch community in a year-end message, “We live in challenging and uncertain times, and while this year has been a difficult one for many in our community, I remain hopeful and optimistic, buoyed by the strength that comes from our resilience and our common vision,” adding that “we will begin the New Year with a new chapter of exploration and opportunity, as we embark on the first full year of our five-year strategic plan.”
The holiday season—with exhilarating gatherings and celebrations in our community—and the winter break not only lifted our spirits but restored some of the calm and confidence that allow us to return our focus to our core mission. In that spirit, I thought it would be appropriate for us to turn our attention to a topic that deserves serious consideration: artificial intelligence (AI) and its impact on our collective future.
The significance of this topic is highlighted by Governor Hochul’s Empire AI announcement earlier this month—the creation of a consortium of “New York’s leading institutions to promote responsible research and development, create jobs, and unlock AI opportunities focused on public good.” A week later, the Simons Foundation announced a historic gift to CUNY institutions under the same initiative. When I attended the Simons Foundation announcement at the CUNY Graduate Center, a phrase used by several speakers caught my attention: “we can lead [in AI research and education] or get left behind.”
Now is the right moment to share the blog I never posted in September—enhanced with additional thinking and new information since that time.
What Is AI All About, and Why Should We Pay Close Attention?
I wrote a blog in April last year about AI and the future of work and was fully aware that it only scratched the surface, as this topic is far too consequential for us to take lightly. I was reminded of this when my wife and I visited the Museum of Modern Art (MoMA) in early summer and saw the exhibition Unsupervised—a nonstop, self-generated artwork created by artist Refik Anadol, who “uses artificial intelligence to interpret and transform more than 200 years of art at MoMA.” We assume art and creativity are uniquely human, and then we see a live demonstration of “a sophisticated machine-learning model to interpret the publicly available data of MoMA’s collection. [T]he model … reimagines the history of modern art and dreams about what might have been—and what might be to come.” Yes, you read that correctly. The model does not combine previous artwork into a digital remix, but rather it finds gaps between “what’s possible” and “what previous artists have created” and invents something new—on its own. As I stood in front of the massive screen that created new art every few seconds, I was speechless.
Once home, I started my summer AI reading list, which included the well-regarded 2014 book Superintelligence by Swedish philosopher Nick Bostrom—founder of the Future of Humanity Institute at Oxford—and the 2023 book The Coming Wave by AI entrepreneur Mustafa Suleyman—co-founder of the prominent AI lab DeepMind.
When possible, I prefer to read simultaneously multiple books, articles, and reports that address the same topic from different perspectives. In this case, Bostrom approached AI from the angle of analytical philosophy, ethics, and humanity, while Suleyman offered current, firsthand knowledge of AI’s core technologies and other recent advances.
Because Bostrom’s book was written prior to the AI frenzy that was stirred up by the release of ChatGPT and the like, it was interesting that he anticipated the emergence of generative AI (tools that respond to user prompts with humanlike responses generated entirely by AI) and postulated an “intelligence explosion” shortly after “the point at which an AI can improve itself again and again, recursively making itself better in ever faster and more effective ways.”
Suleyman began his book with a keen observation on the infusion of technology in human history, recognizing a striking commonality in the technologies making up the coming wave of AI and synthetic biology (which enables us to sequence, modify, and now print DNA). Suleyman wrote, “Once matured, these emerging technologies will spread rapidly, becoming cheaper, more accessible, and widely diffused throughout society. They will offer extraordinary new medical advances and clean energy breakthroughs, creating not just new businesses but new industries and quality of life improvements in almost every imaginable area.”
Bostrom warned of a rapid and uncontrollable increase in AI capabilities, as well as the risk of value misalignment: “break-away AIs” that pursue goals at odds with human values or choose any means necessary without regard for human well-being. Suleyman similarly warned, “This wave [AI and synthetic biology] creates an immense challenge that will define the twenty-first century: our future both depends on these technologies and is imperiled by them.”
In my view, Suleyman’s “Can’t live with them, can’t live without them” sentiment captures the dilemma and the challenges for the future of humanity. It highlights the importance of truly comprehending the risks of these technologies, so we can take thoughtful steps to contain and mitigate them. I am also convinced that we need a deeper understanding of why this new wave of technological innovation might be different from anything we have experienced in history—as many have now claimed. This matters because “optimism bias” or “pessimism aversion” (aka head-in-the-sand) is a common trap, making a more analytical approach to evaluating risk paramount.
Just How Smart Will AI Be? Think Quantum Computing
Several cognitive psychologists assessed GPT-4 and determined it has a verbal IQ between 152 and 155, which would place it in the 99.9th percentile among test takers. Although still a controversial subject, a conservative prediction is that AI systems will reach the equivalent of an IQ over 1,000 in the near future—an estimate supported by advances in the computing power used to “train” these machine-learning models.
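As a quick sanity check on that percentile figure—assuming the standard IQ scale (mean 100, standard deviation 15), which the cited assessments may or may not have used—a verbal IQ of 152 sits roughly 3.5 standard deviations above the mean:

```python
from statistics import NormalDist

# Standard IQ scale: mean 100, standard deviation 15 (an assumption here;
# the exact scoring of the cited assessments may differ).
iq = NormalDist(mu=100, sigma=15)

for score in (152, 155):
    percentile = iq.cdf(score) * 100
    print(f"IQ {score} -> {percentile:.2f}th percentile")
```

On this scale, a score of 152 lands above the 99.9th percentile, consistent with the claim above.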
Allow me to get a bit technical here. What spurred the AI revolution was a drastic improvement in sheer computing power. Technology companies like NVIDIA repurposed the powerful parallel-processing capabilities of Graphics Processing Units (GPUs), originally intended for gaming, to create AI supercomputers that made machine learning practical. Up to now, these technologies have been based on “traditional” digital computers.
Digital computers operate on ever-tinier transistors on silicon chips. Quantum computing, by contrast, operates at the atomic level—utilizing the principles of quantum mechanics to perform simultaneous complex computations at previously unimaginable speeds. The power of a quantum computer is often quantified by the number of “qubits” it possesses. Each additional qubit doubles the machine’s state space, so its power grows exponentially—although real-world limitations such as noise and error correction constrain actual performance. While significant challenges remain before quantum computers can replace digital computers for general-purpose computing, it is important to appreciate the speed of this innovation: In 2019, Google’s 53-qubit Sycamore quantum processor was touted as completing in minutes a calculation that would have taken the top (digital) supercomputer of the time some 10,000 years—roughly 1.5 billion times faster. In December 2023, IBM announced that its Condor processor had reached 1,121 qubits. This is simply mind-blowing.
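To make the “each additional qubit” point concrete, here is a rough illustration of how quickly the state space grows. This counts raw basis states only; in practice, noisy physical qubits yield far fewer usable “logical” qubits:

```python
# An n-qubit register spans 2**n basis states (amplitudes), so the
# state space doubles with every added qubit. This is a simple
# counting illustration, not a measure of practical speed.
def basis_states(n_qubits: int) -> int:
    return 2 ** n_qubits

print(basis_states(53))              # Sycamore-scale: 9,007,199,254,740,992 states
print(len(str(basis_states(1121))))  # Condor-scale: a number with 338 digits
```

By this count, going from Sycamore’s 53 qubits to Condor’s 1,121 multiplies the state space by a factor of 2 more than a thousand times over.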
My point in taking this technical detour is to show that quantum computing has the potential to dramatically accelerate the processing and machine-learning capabilities of AI. In fact, the advent of quantum computing is likely to be one of the most critical drivers of our future—much as digital computers and the internet have been in recent history—which is perhaps a topic for a future blog.
How Will AI Affect Us?
As we observe rapid advances in AI, it is important not to lose sight of its tremendous capabilities and benefits. With the assistance of intelligence and computing power that was never before conceivable, it is possible for humankind to reach its full potential—from curing diseases that have eluded medicine for millennia (such as cancer) to mitigating climate change, optimizing energy consumption and harvesting, and exploring and colonizing space.
Generative AI holds a promising outlook for the near future as well. Following the release of ChatGPT in 2022, McKinsey Digital forecast that generative AI features could add $4.4 trillion to the global economy annually. Its 2023 report predicted a significant impact across all industry sectors, with banking, high tech, and life sciences experiencing the impact first.
McKinsey’s analysis also indicated real risks: an accelerated pace in the adoption of generative AI across industry sectors could result in the automation of 50 percent of today’s work activities between 2030 and 2060, ushering in a new era of workforce transformation. Unlike previous automation waves, which often affected low-skilled workers, generative AI is likely to reach highly educated personnel, automating certain specialties and targeting knowledge work. Professions in education, law, technology, and the arts will see their tasks become more streamlined, pushing professionals to refocus aspects of their jobs.
According to McKinsey, one way to interpret this result is that “generative AI will challenge the attainment of multiyear degree credentials as an indicator of skills, and others have advocated for taking a more skills-based approach to workforce development in order to create more equitable, efficient workforce training and matching systems.” This underscores how important it is for all of us to adopt a mindset of continual professional upskilling and lifelong learning.
What Should We Be Doing at Baruch?
Baruch has been paying close attention to AI’s development and its potential impact on our students, faculty, and staff. In fall 2023, alumnus Petar Petrov, Chief AI Officer at Eleven Ventures, offered a keynote presentation (“The AI Revolution”) for the inaugural Baruch Artificial Intelligence Summit. One of the summit’s key outcomes was the formation of the Baruch College AI Think Tank, convened by Provost Linda Essig, with some 27 members of our community signed up to serve. Moving forward, the AI Think Tank will address topics such as teaching, learning, and workforce development; research; and AI for operations, as well as develop an action plan.
New York State’s Empire AI initiative and the landmark gift from the Simons Foundation will help to stimulate cross-institutional collaboration while setting clear goals and directions for us to stay at the forefront of AI research, policy, and education.
In the spirit of the College’s Strategic Plan 2023–28, launched in August 2023, chief among our core values of excellence and innovation is staying at the forefront of what we teach and create. As such, we need to think creatively about how we prepare our students for a future that may be drastically different from what we presume. We must pay close attention to what AI and other disruptive technologies mean—in every aspect of what we do—as we prepare our students, and ourselves, to face the new world that has already arrived.
Note: I will be moving away from posting a monthly blog. Instead, I will publish a blog when I have specific thoughts and ideas that are worthy of a dialogue within our community.