Phorum Technology Conference Tackles AI-Driven Volatility

12-minute read | 22 May 2023

By Michael N. Price

As recent advances in artificial intelligence continue to revolutionize the way people seek out information and generate digital content online, companies of all sizes find themselves grappling with how those innovations may impact, or even disrupt, the way they and their employees do business.

On May 3rd, nearly two hundred business leaders, entrepreneurs, and innovators gathered outside Philadelphia to share their perspectives and strategies for navigating an increasingly volatile and ever-changing technological world.

This year’s Phorum, an annual one-day conference hosted by the Philadelphia Alliance for Capital and Technologies (PACT), featured a series of panel discussions with business leaders and technology experts who focused on the rapidly increasing and disruptive impact that artificial intelligence (AI) is having on the world of global business.

“I don’t think anyone can doubt that artificial intelligence is changing the way that lives are lived,” said Michael Bachman, Phorum’s chairman and principal technologist at Boomi. “It’s not just about business, it’s not just about commerce, it’s not just about education. This is a societal shift, and we’re at a pivotal moment, right now.”

Reinventing Business, Responsibly

While artificial intelligence technologies were first introduced decades ago, the emergence of Large Language Models (LLMs) like OpenAI’s ChatGPT and Google’s Bard has led to an explosion of both accessibility and attention. For the first time, even individuals without technical skills are able to interact with powerful technologies that are reinventing how we think about business and work.

With the theme of “Embracing Volatility: AI in Technology, Talent, and Planet,” the event featured conversations on the use of AI in business innovation, as well as the inherent risks facing just about everyone as these technologies become mainstream.

While advances in AI are generating unprecedented excitement and unlocking new possibilities that once seemed more like science fiction, some industry leaders have urged caution, or even alarm, over the technology’s potential for unintended consequences.

“It’s up to us to not only look at the benefits and strengths that this superpower can give us, but also the potential risks, and what our obligations are to our society to make sure that we imbue in our businesses, in our relationships, in our education, in our government, in any system that we have under which we work to make it better, effective for us as humans, and also safe,” Bachman said.

AI Opportunities and Risks

Arlen Shenkman, Boomi’s CFO and president, joined a panel of business executives to discuss the opportunities and risks facing companies as they begin to introduce the use of AI into their daily operations and explore new ways to put these tools to work.

Shenkman said he sees AI as a way to improve operational efficiency and automate critical business processes, allowing businesses to save time and money and accelerate growth.

“I think there is a major impact in terms of productivity and process and operating discipline that AI has a chance to play. How you run your business and how it fits in there,” Shenkman said, also commenting that AI advances will impact how companies develop software products, manage roadmaps, and deliver value and insights to customers.

At the same time, Shenkman said he believes companies are still figuring out how to enable the use of the technology in a way that is safe, secure, and leads to positive results for the business.

“I think without responsibility there is no accuracy, and without accuracy we will quickly find ourselves in a position where we won’t know what’s correct and what’s not. I think that’s the scariest thing about this,” he said, pointing to what he sees as a strong need for collaboration between private businesses and public institutions to ensure not only the veracity of AI-generated content, but also to maintain public trust in its use.

“I think it is important that there is some public-private partnership, simply because I worry about the durability of the technology if no one can trust it at all,” Shenkman added.

Many technology leaders have expressed similar concerns, some even more dire. In March, the Future of Life Institute published an open letter calling for a six-month pause in the development and testing of AI technologies to allow for further study of the risks created by the widespread use of new tools like GPT-4. The original letter featured over 600 signatures from some of the largest names in tech. Since then, over 27,000 more people have signed the letter online.

The letter states:

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

AI: Good vs. Evil

At Phorum, that debate continued in a panel titled “AI: Good vs Evil,” where experts exchanged ideas on both the need for and feasibility of AI regulation, and discussed in greater depth whether AI would ultimately benefit or harm humanity as a whole.

The panel included James Thomason, CTO of the growth-focused private equity firm Next Wave Partners, who said he was one of the technology leaders who originally signed the letter calling for a pause in AI development.

“I put my name to that because I think that any kind of a pause would be better than no pause, at this point,” he said. “I think we’re going to have to have not only regulation but more of an open set of standards, bodies, people from different disciplines looking at this problem because I don’t think we have a good answer right now.”

Thomason was joined by Ethan Mollick, an associate professor at the University of Pennsylvania’s Wharton School of Business, who said that he agrees with the concerns behind calls for an AI pause but ultimately believes the push came too late. Instead of broader bans on the technology as a whole, Mollick said the focus should be on understanding impacts to employment and education and equipping new generations of workers with the skills they will need to remain competitive.

Transformative Technology

“I think it’s probably too late for a lot of the kinds of regulations that you’re talking about, the cat is out of the bag,” Mollick said, referencing the dramatic gains in efficiency many industries have already realized using publicly available AI tools like GPT-4.

Mollick, who teaches entrepreneurship at Wharton, said he now requires the use of AI tools in every class he teaches. His students are using them to write working code, create marketing materials and images, and produce better-quality writing — and, he added, they are generating 10 times the volume of output.

“It has been this incredible capability increase for everything they do, and all I can do is tell them [they] need to learn these tools…and figure out how to move quickly in the space that’s here — it’s been transformative,” Mollick said.

Business Disruption

While the panel acknowledged the most extreme fears surrounding AI, like a future superintelligence becoming self-aware and eradicating humanity, there appeared to be consensus that the more realistic and imminent threat was the potential for massive disruption to labor markets and education systems.

“I’m much more worried about the short term impact, both good and bad, of jobs, impact on jobs, impact on education,” Mollick said. “That stuff is going to happen regardless of whether or not AI wakes up and eats us all.”

He added, “I think we have too much concern over an AI apocalypse and not enough over the fact that we have a general purpose technology with the potential of causing massive disruption, that doesn’t need any advances from today, to already mess with almost every field that we’re talking about.”

Mollick recalled a recent conversation with one executive who views AI as a way to drastically reduce his skilled-labor force and replace large numbers of workers with cheaper labor.

“I think we need to think about large scale industrial regulation, what happens when jobs are replaced, how do we re-skill, what areas is it ok to be fooled that you’re talking to an AI? Is it ok if you’re a customer service team to reach an AI? Who is responsible for using AI correctly inside work environments? I think those sets of changes are going to happen very quickly on a grand scale and if we’re not putting regulatory frameworks in place for them, we’re in trouble,” he said.

An Optimistic Approach

Caroline Yap, the director of AI practice at Google, who also joined the panel, said that replacing humans is not the advice the technology giant and leading AI innovator offers its many enterprise customers exploring the use of AI.

“We do not advise that,” Yap said, referring to fears that AI could lead to widespread job losses as humans are replaced by machines on a large scale. “Any kind of transformation requires people and process to a certain degree. You still have to train it. You still have to train it to how your business corpus is and not just abandon it.”

Yap said most of the businesses she is working with at Google are taking an optimistic approach to the technology and focusing on how it can enable transformation while amplifying the productivity of their existing workforce.

“Most of them are actually seeing it in a more positive way because they are finding ways to embrace technology much more differently than when virtualization first came about,” she said. “We’re seeing a lot of that, how can they use AI to augment their staff and augment the knowledge that’s required for people and how can they use AI to train people faster, versus just replacing, carte blanche, everyone.”

Self-Policing Concerns

Still, some technologists like Thomason find it difficult to trust that the free market will lead for-profit businesses to do the right thing on their own, without oversight and regulation. He compared AI advances to the massive disruption caused by the introduction of the Internet, a change that he said society still has not fully come to terms with some three decades later.

“Look what we did with the internet. We’ve been running a planet-wide experiment for about 30 years that tests the limits of our adaptation, and that is to unleash everyone together to bring communication proximity to a point that has never been done by our species before,” he said. “When technology becomes adaptive, when it becomes complex, it starts to impose its own sort of rational logic on society. It starts to impose requirements on our legal system. It starts to transform our economy. It starts to transform our jobs. And we have absolutely no idea how it’s going to do that — everyone here, including myself, could be absolutely wrong.”

Despite these concerns and potential risks, no speaker questioned that AI will be a transformative force for both individuals and businesses. Even with so much uncertainty, there was little doubt that these tools will bring about fundamental changes to just about everything we do.

A Human-Centric View

Peter Coffee, the vice president of strategic research at Salesforce, made several virtual appearances throughout the day. He acknowledged the turbulent macroeconomic situation that has some business leaders feeling cautious about investing in new and unproven technologies, but urged companies to embrace AI tools to solidify their position during difficult times instead of shying away from them.

“If you’re in a hole, don’t buy lawn furniture to get comfortable at the bottom of the hole, buy a ladder and get out of it,” he said. “Buy a rocket belt and be above the field while everyone else is still in their holes, this is how you lead the next ‘good time’ instead of merely surviving a ‘bad time.’”

For an event that focused so much on machine intelligence, speakers like Coffee took a human-centric view when summarizing their thoughts on the future ahead. The effectiveness of these technologies, and their overall impact, will come down largely to the humans who build and interact with them.

“We have to equip people not to just believe in a future but also to be effective change agents for that future,” Coffee said. “So we need to not merely convey possibility but document it, and support it, and give them the weapons they’ll need to become effective advocates for change when they get back to the office, and not just encounter the friction and inertia that keeps things from happening.”

While the event featured more questions than answers about AI’s future, there’s little doubt the technology will continue to impact, if not disrupt, much of the modern world.

Want to learn about Boomi’s perspective on AI? Read our executive brief, “Why an AI-First Strategy is Essential for Success.”