


Heroku AI | Managed Inference and Agents

Over the last couple of years, we’ve repeatedly heard the question, “Who will build the Heroku of AI?” The answer, of course, is that Heroku will.

We are excited to bring AI to the Heroku platform with the pilot of Managed Inference and Agents, delivered with the graceful developer and operator experience and the composability that are at the heart of Heroku.

Heroku’s Managed Inference and Agents provide access to leading AI models from the world's top AI providers. These solutions optimize the developer and operator experience, making it easy to extend applications on Heroku with AI. Heroku customers can rely on this high-performance, high-trust AI service to focus on their core business needs, while avoiding the complexity and overhead of running their own AI infrastructure and systems.

Heroku AI

At its creation, Heroku took something desirable but complicated—deploying and scaling Rails applications—and made it simple and accessible, so that developers could focus on the value of their applications rather than all the complexity of deploying, scaling, and operating them.

Today, Heroku is doing the same with AI. We’re delivering a set of capabilities that enable developers to focus on the value of their applications augmented with AI, rather than taking on the complexity of operating this rapidly evolving technology. Managed Inference and Agents is the initial offering of Heroku AI, and the cornerstone of our strategic approach to AI on Heroku.

Managed Inference and Agents

Developing applications that leverage AI often means interoperating with large language models (LLMs), embedding models (to power retrieval-augmented generation, or RAG), and various image or multi-modal models that support content beyond text. The range of model types is vast, their value in different domains varies widely, and their APIs and configurations are often divergent and complex.

Heroku Managed Inference provides access to an opinionated set of models, chosen for their generative power and performance, optimized for ease of use and efficacy in the domains our customers need most.

Adding access to an AI model to your Heroku application is as easy as running heroku ai:models:create in the Heroku CLI. This command provides the environment variables for the selected model, making it seamless to call from within your application.
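For example, a minimal provisioning flow might look like the sketch below. The model name, app name, and resulting config variable names are illustrative assumptions, not a definitive reference:

    $ heroku ai:models:create claude-3-5-sonnet --app my-app   # model and app names are examples
    $ heroku config --app my-app                               # inspect the attached config vars
    INFERENCE_KEY:      ...
    INFERENCE_MODEL_ID: claude-3-5-sonnet
    INFERENCE_URL:      https://us.inference.heroku.com

Your application then reads those variables at runtime to authenticate and call the model, just as it would any other attached Heroku resource.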

To facilitate model testing and evaluation, the Heroku CLI also provides heroku ai:models:call, which lets users interact with a model directly from the command line. This simplifies optimizing prompts and context and debugging interactions with AI models.
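A quick evaluation from the terminal might look like the following; the model alias and flags shown here are assumptions for illustration:

    $ heroku ai:models:call my-inference-model --app my-app \
        --prompt "Summarize the trade-offs between RAG and fine-tuning in two sentences"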

Heroku Agents extend Managed Inference with an elegant set of primitives and operations, allowing developers to create AI agents that can execute code in Heroku’s trusted Dynos, as well as call tools and application logic. These capabilities allow agents to act on behalf of the customer and to extend both application logic and platform capabilities in developer-centric ways. Developers can interleave application code, calls to AI, execution of AI-generated logic, and use of AI tools, all within the same programmatic context.
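As a rough sketch of what that interleaving could look like from an application, the request below asks an agent a question and gives it a code-execution tool it may invoke in a dyno while reasoning. The endpoint path, environment variable names, and tool name are assumptions for illustration, not a definitive API reference:

    $ curl -s "$INFERENCE_URL/v1/agents/heroku" \
        -H "Authorization: Bearer $INFERENCE_KEY" \
        -H "Content-Type: application/json" \
        -d '{
              "model": "claude-3-5-sonnet",
              "messages": [{"role": "user", "content": "Compute the 20th Fibonacci number and show your work."}],
              "tools": [{"type": "heroku_tool", "name": "code_exec_python"}]
            }'

The agent decides when to run code or call a tool, and the results flow back into the conversation alongside your own application logic.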

Join the Pilot Today

Heroku Managed Inference and Agents is now in Pilot, and we invite you to join this exciting phase of the product to push the boundaries of AI applications. Apply to join the Managed Inference and Agents Pilot here, and please send any questions, comments, or requests our way.

Check out this blog for more details about how Heroku, a Salesforce company, supercharges Agentforce.

Originally published: December 02, 2024
