Guardrails for Gen AI: A Perspective from Davos
The evolving landscape of Artificial Intelligence
As a first-time attendee at the World Economic Forum in Davos this year, I got a front-row seat to the evolving narrative around Artificial Intelligence (AI) and its impact on the global business landscape. As the CEO of Sendbird, a communications platform powering many of the largest consumer apps worldwide, I find this a timely topic.
Here’s my key takeaway for other business leaders: double down on generative AI to outcompete in organizational productivity, speed of communications, and customer engagement.
Let’s discuss these business priorities in more detail.
In Davos, the conversation around Artificial Intelligence focused on its impact across sectors, including how AI shapes the speed of organizational response. Key discussions emphasized AI's potential to drive productivity and economic growth, with projections indicating substantial productivity gains across industries (IBM cited a 20-40% boost in efficiency for programmers using AI).
Speed of communications
These productivity gains matter because a key insight shared during the event was that fast-moving companies reap the benefits of superior operational resilience, financial performance, growth, and innovation. Speed, in other words, is a critical factor in organizational success, and that recognition underlines the need for near-immediate, intelligent internal (employee) and external (customer) communications.
Never have we been closer to automating intelligent business-to-customer conversations than we are today with generative AI. Generative AI is transforming the business communication landscape by enabling autonomous one-on-one conversations with individual customers or users. In the B2C context, this means businesses can respond rapidly to customer queries with highly personalized replies, and even anticipate customer needs before they arise, without any human labor or involvement.
That said, the power of generative AI also raises emerging ethical concerns around safety, trust, and ownership. Careful regulation and oversight of generative AI was an important topic of discussion at this year’s World Economic Forum. Leaders debated whether prompts should be copyrighted and how to differentiate between human-made and machine-generated content (some experts predict the majority of online content will be AI-generated by 2025).
The careful balance of innovation and regulation of generative AI remains a vast topic following the event. The emphasis on ethical AI use requires businesses to be more mindful of how AI is applied in customer communications. This involves ensuring data privacy and security, avoiding biases in AI algorithms, and preventing misinformation in AI chatbot communications. But before jumping to strict government regulations, we should start with guidelines for restricted content, along with moderation and authentication of the parties communicating.
Across the communications technology landscape, we already see providers moving toward moderation. For example, OpenAI offers a moderation endpoint for checking content against its usage policies. Google has also announced an AI-generated content policy that goes into effect on January 31, 2024. Under this policy, Google will require Android app developers to include a way for users to report offensive AI-generated content within any Google Play app that generates such content. At Sendbird, our R&D team is investigating how to integrate a generative AI solution like our own SmartAssistant into the advanced moderation process of our communications API platform.
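For developers curious what this looks like in practice, here is a minimal sketch of how a chat backend might gate an outgoing message on the result of OpenAI's moderation endpoint. The `should_block` helper and the sample payload are illustrative (not Sendbird's or OpenAI's code), but the response shape follows the documented `/v1/moderations` schema, where each result carries a boolean `flagged` field:

```python
# Sketch: gate a chat message on an OpenAI moderation result.
# `should_block` is a hypothetical helper; the dict it inspects mirrors
# the documented /v1/moderations response:
#   {"results": [{"flagged": bool, "categories": {...}, ...}]}

def should_block(moderation_response: dict) -> bool:
    """Return True if any moderation result is flagged."""
    results = moderation_response.get("results", [])
    return any(r.get("flagged", False) for r in results)

# In production, `moderation_response` would come from an HTTP call such as
#   POST https://api.openai.com/v1/moderations  with body {"input": message_text}
# Here we use a hand-built sample shaped like the API's output.
sample = {"results": [{"flagged": True, "categories": {"harassment": True}}]}
print(should_block(sample))  # True -> hold the message for review
```

A platform could run this check before delivery and route flagged messages to a human review queue rather than dropping them silently, which keeps the moderation step auditable.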