Answering Reddit’s top questions about AI agents

AI agents—software systems that act autonomously on behalf of users—are as promising as they are polarizing. Agents have taken the world by storm: investment is surging, speculation is rampant, and people everywhere are asking what to make of them, or how best to harness them.
Since the rise of AI agents, Sendbird has been hard at work combining our award-winning, enterprise-grade communication APIs with agentic AI. The result is an AI agent platform that helps enterprises connect with customers anytime, anywhere, without limits.
In this article, our in-house AI experts and product team share their perspectives on Redditors’ biggest questions about AI agents—helping you cut through the hype and understand the reality on the ground.
Question 1: Are AI agents just hype?

Short answer: AI agents are real and groundbreaking technology—but most aren’t yet production-ready. Overhype is common.
Why the hype exists:
Breakthrough potential: Unlike previous automation, AI agents can reason, make decisions, use tools and external data, and complete tasks end-to-end in pursuit of the goals set for them.
Rapid progress: AI agents can handle a wider range of tasks than previous AI systems and can even set their own goals, creating excitement about the future of work. The technologies underlying agents (e.g., generative AI, machine learning) are also advancing rapidly.
Buzz & investment: Companies eager to capitalize on the AI trend are spuriously labeling products as AI agents in attempts to attract customers and investors.
Why doubt remains:
“Agent washing”: Many products marketed as “AI agents” are just automation with a new label to appeal to the AI trend.
Limitations vs ambitions: Currently, agents can misfire, hallucinate, or loop (pathway explosion), especially in complex workflows.
High failure rates: A 2025 RAND study found 80–90% of AI projects never leave the pilot phase; Gartner expects 40% of agent projects to be scrapped by 2027. There’s more enthusiasm than execution at present.
Reality check: AI agents aren’t magic—nor are they vaporware. Like cloud computing before them, agents need strong safety, governance, and transparency layers to earn enterprise trust and adoption. Until those safety layers are standard, hype will outpace results.
How to make agents work today:
Start small and specific: Agents are more likely to deliver returns in narrow, enduring use cases (e.g., handling ecommerce returns, automating outreach) than broad, general-purpose tasks.
Balance human + AI: For instance, Klarna’s “all-in” approach to AI customer service received customer blowback until it reintroduced human agents in the mix.
Go slow to go fast: Early AI adopters see value when they pair realistic expectations with strong guardrails.
Governance = trust: Deployments need guardrails and frameworks to scale without compromising trust, compliance, and security. AI agent platforms like Sendbird, for example, address the transparency, fragility, and trust problems by pairing enterprise-grade APIs with AI governance frameworks that make agents resilient, safe, and responsible.
Question 2: Do you know any real-world examples of using AI agents?

Short answer: Yes—AI agents are already powering tasks across finance, healthcare, retail, and daily life. Most success so far comes from narrow, well-defined use cases.
Examples from everyday life:
Virtual assistants: Siri, Alexa, and Google Assistant can set reminders, manage schedules, and control smart home appliances using agentic AI capabilities.
Recommendation engines: Netflix, Spotify, and Amazon use AI agents to analyze user behavior and then personalize content, product recommendations, and more.
Smart devices: Consumer devices like smart thermostats, smart security systems, or robot vacuum cleaners now embed AI agents for enhanced capabilities.
Self-driving cars: Autonomous vehicles from companies like Waymo and Tesla use multiple AI agents to perceive the environment, make decisions, and execute actions.
In business and industry:
Finance: Mastercard’s AI agent scores 100B+ transactions annually for fraud risk in milliseconds; JPMorgan’s COiN analyzes legal docs; high-frequency trading relies on AI agents for speed.
E-commerce: Amazon uses AI agents for dynamic pricing and personalized recommendations; Sephora’s Virtual Artist enables virtual try-ons.
Customer service: Beyond scripted bots, AI agents handle multi-step tasks like refunds, transactions, and subscription renewals with user context. Klarna, for example, leaned too heavily into AI customer service before rebalancing with human support.
Healthcare: AI agents enhance diagnosis, drug discovery, and robotic surgery, while their autonomous, goal-oriented behavior frees up providers to deliver better care and outcomes.
Workplace tools: Microsoft Copilot agents summarize documents, draft responses, and automate workflows in real time.
Reality check: AI agents are being applied to a widening set of use cases, but still falter in overly broad or ambitious workflows they’re not yet ready for. This is why agentic frameworks and orchestration standards like MCP (Model Context Protocol) are emerging: they help agents coordinate their reasoning engines, APIs, and knowledge sources, improving performance and reliability across a growing range of tasks.
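The coordination pattern that standards like MCP formalize—a reasoning engine choosing among registered tools and feeding results back into the workflow—can be sketched in a few lines of plain Python. This is an illustrative sketch only, not MCP itself: the tool names, the keyword-based planner (a stand-in for an LLM), and the argument extraction are all assumptions made for the example.

```python
# Sketch of the agent-orchestration pattern that protocols like MCP
# standardize: tools are registered in one place, a planner picks one
# for a given request, and the agent executes it. The planner below is
# a simple keyword rule standing in for an LLM; all names are illustrative.

def lookup_order(order_id: str) -> str:
    """Illustrative tool: pretend to query an order-tracking system."""
    return f"Order {order_id}: shipped"

def issue_refund(order_id: str) -> str:
    """Illustrative tool: pretend to trigger a refund workflow."""
    return f"Refund started for order {order_id}"

# Tool registry: the single catalog a planner chooses from.
TOOLS = {"lookup_order": lookup_order, "issue_refund": issue_refund}

def plan(request: str) -> tuple[str, str]:
    """Stand-in for the reasoning engine: map a request to (tool, argument)."""
    order_id = request.split()[-1]          # naive argument extraction
    if "refund" in request.lower():
        return "issue_refund", order_id
    return "lookup_order", order_id

def run_agent(request: str) -> str:
    tool_name, arg = plan(request)
    return TOOLS[tool_name](arg)            # execute the chosen tool

print(run_agent("where is order A123"))       # Order A123: shipped
print(run_agent("please refund order B456"))  # Refund started for order B456
```

The value of a shared protocol is that the registry, the planner, and the tools can live in separate systems yet still interoperate—which is precisely where ad hoc agent integrations tend to break down.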
How to make AI agents work today:
Blend autonomy with human oversight, especially in sensitive domains or regulated industries like healthcare or finance.
Focus agents on high-volume, low-margin tasks (fraud checks, order tracking, recommendations). Here’s a 601-item list of examples from Google.
Layer in strong governance and transparency from the start to ensure reliable, responsible operations. AI agent platforms like Sendbird come with built-in AI observability tools and governance frameworks to scale agents with minimal risk.
Question 3: Developers building AI agents—what are your biggest challenges?

Short answer: Developers face hurdles at every layer, from data quality and performance bottlenecks to transparency and compute costs. As a technology defined by novel autonomous operation and decision-making, AI agents present myriad challenges, even to experts.
Technical & architectural challenges:
Data quality & availability: Agents need access to high-quality, structured data to function, but often encounter scattered, noisy, or siloed systems, which undermine decision-making.
Memory & context: Managing agents’ long-term memory and session data is difficult, as APIs can only carry so much context before computing costs and latency spike.
Multi-agent system complexity: Coordinating multiple agents to interact around complex tasks presents challenges in communication, decision-making, and dependencies that can derail workflows.
Scalability: Scaling agents to handle bigger datasets, user volumes, and more complex tasks can lead to performance bottlenecks, unwieldy architectures, and ballooning costs.
Reliability & error handling: Agents can behave inconsistently, while API failures or edge cases can stall an entire workflow if monitoring and fallback rules aren’t in place.
Integration: Plugging agents into existing enterprise and legacy systems is often more difficult than building the AI agent itself.
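The reliability concern above—one flaky API call stalling an entire workflow—is commonly mitigated with a retry-then-fallback wrapper so the agent degrades gracefully instead of looping or hanging. The sketch below is a minimal illustration under assumed names: `with_fallback`, the retry budget, and the human-handoff fallback are all hypothetical, not part of any particular framework.

```python
# Minimal retry-then-fallback sketch: run an action a bounded number of
# times, and if it keeps failing, hand off to a fallback (e.g., a human
# agent) rather than stalling the workflow. All names are illustrative.
import time

def with_fallback(action, fallback, retries: int = 2, delay: float = 0.0):
    """Try `action` up to retries+1 times; on repeated failure, call `fallback`."""
    for _ in range(retries + 1):
        try:
            return action()
        except Exception as exc:
            last_error = exc
            time.sleep(delay)               # back off before retrying

    return fallback(last_error)             # escalate instead of crashing

# Usage: a tool call that always fails within the retry budget,
# so the fallback takes over.
def flaky_tool():
    raise RuntimeError("upstream API timeout")

def human_handoff(error):
    return f"escalated to human support ({error})"

print(with_fallback(flaky_tool, human_handoff))
# → escalated to human support (upstream API timeout)
```

The key design choice is bounding every failure path: a fixed retry budget prevents the looping behavior described earlier, and the fallback turns an unhandled exception into a deliberate handoff.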
Operational & financial challenges:
Computational costs: Running large-scale AI agents requires a lot of compute power, making cost control a constant concern and a major barrier for some.
Framework complexity: Overly complex agent frameworks can make simple tasks hard to implement. Developers must balance modular design with usability, and matching the right architecture and models to the appropriate use cases is a key decision point.
Ethical & control challenges:
Autonomy vs control: Agents acting autonomously can be unpredictable. Balancing their autonomy with human oversight and control is a recurring obstacle.
Bias & safety: Agents trained on biased data can reproduce those biases in production, resulting in unethical, irresponsible, or unsafe outputs without monitoring and guardrails.
Transparency & explainability: Agent decision-making is often a black box, making errors difficult to comprehend, identify, and correct without proper observability tools.
Reality check: Even the best-designed and best-governed AI agents struggle when expectations outpace their current capabilities—and it’s still early days. Much of Sendbird’s work involves grounding pie-in-the-sky expectations in what agents are presently capable of.
Admittedly, developers who find that what’s marketed as an autonomous “agent” feels like a glorified workflow aren’t necessarily wrong. Agents are new, evolving, and their full potential remains untapped. Their frameworks feel complex now because they’re built to support capabilities that will only matter as agent use cases get more sophisticated, which is a function of time.
In the meantime, AI agent builders like Sendbird help developers and organizations clear these hurdles with robust APIs, observability features, and AI governance frameworks that keep agents’ evolving workflows both manageable and reliable at scale.