top of page
Поиск

Autonomous AI Agents Are Broadening Business Risk — How Security Leaders Should Respond

  • Фото автора: Andrej Botka
    Andrej Botka
  • 6 мая
  • 2 мин. чтения

Automated systems that act on their own are gaining direct access to corporate data and, in many cases, operate without human checks. Security teams and boards need to treat non-human identities as a core control problem — not an afterthought.


For security and IT teams, the immediate danger is clear: programs that can make decisions and carry out tasks — often called autonomous agents or, in some security circles, OpenClaw — are becoming common users of systems and data. Industry research warns that as many as two in three organizations now let such agents touch sensitive information without basic protections. And automated credentials are increasingly doing the logging-in: more than four out of five authentication attempts now come from non-human systems, yet they get scrutiny in fewer than one out of 20 security reviews.


These agents do real work. In larger firms — those with roughly 1,000 employees — it’s not unusual to find on the order of ten thousand machine-facing connections: API keys, OAuth tokens, service accounts and other secrets. Nearly one in four apps tied to Google Workspace can read, change or delete data, and about one-half of tokens linking Salesforce to third-party tools sit unused and provide stale entry points. That means a finance bot could move funds, a support bot could alter customer records, and an engineering bot could change code — all under a machine identity rather than a human signature.


The problem stems from control systems designed for people. Access governance traditionally assumes a human at the other end of a login; it’s poorly equipped for programs that act autonomously and can proliferate credentials. Practitioners report they’re wrestling with decades-old accounts that lack the contextual metadata needed for modern monitoring, creating blind spots across the enterprise.


Fixes are organizational as much as technical. Boards should elevate automated-identity management to the same priority as patching and ransomware defense. Security teams need platforms that make non-human identities visible and that can interpret what an agent is being asked to do. Analysts predict that by 2028 roughly seven out of ten chief security officers will require identity-intelligence tools that review both human and machine access. Tools that simulate attacks on language-based agents and test prompt-based weaknesses are becoming a practical part of development pipelines, and industry initiatives are focusing on tighter privileged access controls and behavior-based monitoring.


On the ground, companies should start by inventorying every non-human account, removing unused tokens, rotating secrets and enforcing least privilege. Separate sensitive duties so no single automated identity can complete a high-risk workflow alone, and bake automated security tests into CI/CD. As one hypothetical security director at a regional bank put it, “Treat machine accounts like employees: give them only what they need, check what they do, and make someone accountable.” Delay only increases exposure — and this time the “user” at fault might not be a person.

 
 
 

Комментарии


Subscribe here to get our latest posts

© 2035 by The StartupsCentral. 

  • Facebook
  • Twitter
bottom of page