
How to Build Your First AI Agent and Deploy it to Sevalla

Artificial intelligence is changing how we build software.

Just a few years ago, writing code that could talk, decide, or use external data felt hard.

Today, thanks to new tools, developers can build smart agents that read messages, reason about them, and call functions on their own.

One framework that makes this easy is LangChain. With LangChain, you can link language models, tools, and apps together. You can also wrap your agent inside a FastAPI server, then push it to a cloud platform for deployment.

This article will walk you through building your first AI agent. You will learn what LangChain is, how to build an agent, how to serve it through FastAPI, and how to deploy it on Sevalla.

What is LangChain?

LangChain is a framework for working with large language models. It helps you build apps that think, reason, and act.


A model on its own only gives text replies, but LangChain lets it do more. It lets a model call functions, use tools, connect with databases, and follow workflows.

Think of LangChain as a bridge. On one side is the language model. On the other side are your tools, data sources, and business logic. LangChain tells the model what tools exist, when to use them, and how to reply. This makes it ideal for building agents that answer questions, automate tasks, or handle complex flows.

Many developers use LangChain because it is flexible: it supports many AI models and fits well with Python.

LangChain also makes it easier to move from prototype to production. Once you learn how to create an agent, you can reuse the pattern for more advanced use cases.

I recently published a detailed LangChain tutorial here.

Building Your First Agent with LangChain

Let us make our first agent. It will respond to user questions and call a tool when needed.

We will give it a simple weather tool, then ask it about the weather in a city. Before that, create a file called .env and add your OpenAI API key. LangChain will automatically pick it up when making requests to OpenAI.

OPENAI_API_KEY=<key>

Here is the code for our agent:

from langchain.agents import create_agent
from dotenv import load_dotenv

# load environment variables (reads OPENAI_API_KEY from .env)
load_dotenv()

# defining the tool that the LLM can call
def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

# Creating an agent
agent = create_agent(
    model="gpt-4o",
    tools=[get_weather],
    system_prompt="You are a helpful assistant",
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "What is the weather in san francisco?"}]}
)

# print the agent's final reply
print(result["messages"][-1].content)

This small program shows the power of LangChain agents.

First, we import create_agent, which helps us build the agent. Then we write a function called get_weather. It takes a city name and returns a friendly sentence.

The function acts as our tool. A tool is something the agent can use. In real projects, tools might fetch prices, store notes, or call APIs.
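For example, a note-storing tool is just another plain Python function with a clear docstring. Here is a minimal sketch (the save_note name and the notes.txt file are only illustrations, not part of the original example):

# a sketch of another tool the agent could call: saving a note to a local file
def save_note(note: str) -> str:
    """Save a note so the user can look it up later."""
    with open("notes.txt", "a", encoding="utf-8") as f:
        f.write(note + "\n")
    return f"Saved note: {note}"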

Next, we call create_agent and give it three things: the model we want to use, the tools we want it to call, and a system prompt. The system prompt tells the agent who it is and how it should behave.

Finally, we run the agent. We call invoke with a message.

The user asks for the weather in San Francisco. The agent reads this message. It sees that the question needs the weather function. So it calls our tool get_weather, passes the city, and returns an answer.

Even though this example is tiny, it captures the main idea. The agent reads natural language, figures out what tool to use, and sends a reply.
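If you want to watch that flow, you can print the full message history the agent returns. A small sketch, assuming the result dictionary exposes the same "messages" list we use later in the FastAPI wrapper:

# walk through the conversation the agent produced: the user message,
# the tool call, the tool result, and the model's final reply
for message in result["messages"]:
    print(type(message).__name__, ":", getattr(message, "content", ""))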

Later, you can add more tools or replace the weather function with one that connects to a real API. But this is enough for us to wrap and deploy.
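For instance, a live version of the weather tool could call a public weather service. The sketch below assumes the free Open-Meteo geocoding and forecast endpoints and the requests library; treat it as a starting point rather than a drop-in replacement:

import requests

def get_live_weather(city: str) -> str:
    """Get the current temperature for a city (sketch using Open-Meteo)."""
    # look up the city's coordinates
    geo = requests.get(
        "https://geocoding-api.open-meteo.com/v1/search",
        params={"name": city, "count": 1},
        timeout=10,
    ).json()
    if not geo.get("results"):
        return f"Sorry, I could not find {city}."
    place = geo["results"][0]
    # fetch the current weather for those coordinates
    weather = requests.get(
        "https://api.open-meteo.com/v1/forecast",
        params={
            "latitude": place["latitude"],
            "longitude": place["longitude"],
            "current_weather": "true",
        },
        timeout=10,
    ).json()
    temp = weather["current_weather"]["temperature"]
    return f"It is currently {temp}°C in {city}."

# register it alongside (or instead of) the toy tool
agent = create_agent(
    model="gpt-4o",
    tools=[get_weather, get_live_weather],
    system_prompt="You are a helpful assistant",
)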

Wrapping Your Agent with FastAPI

The next step is to serve our agent. FastAPI helps us expose our agent through an HTTP endpoint. That way, users and systems can call it through a URL, send messages, and get replies.

To begin, install FastAPI and uvicorn (the server that runs it), then write a simple file like main.py. Inside it, you import FastAPI, load the agent, and write a route.

When someone posts a question, the API forwards it to the agent and returns the answer. The flow is simple.

The user talks to FastAPI. FastAPI talks to your agent. The agent thinks and replies.

Here is the FastAPI wrapper for your agent:

from fastapi import FastAPI
from pydantic import BaseModel
import uvicorn
from langchain.agents import create_agent
from dotenv import load_dotenv
import os

load_dotenv()

# defining the tool that the LLM can call
def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

# Creating an agent
agent = create_agent(
    model="gpt-4o",
    tools=[get_weather],
    system_prompt="You are a helpful assistant",
)

app = FastAPI()

class ChatRequest(BaseModel):
    message: str

@app.get("/")
def root():
    return {"message": "Welcome to your first agent"}

@app.post("/chat")
def chat(request: ChatRequest):
    result = agent.invoke({"messages": [{"role": "user", "content": request.message}]})
    return {"reply": result["messages"][-1].content}

def main():
    port = int(os.getenv("PORT", 8000))
    uvicorn.run(app, host="0.0.0.0", port=port)

if __name__ == "__main__":
    main()

Here, FastAPI defines a /chat endpoint. When someone sends a message, the server calls our agent. The agent processes it as before. Then FastAPI returns a clean JSON reply. The API layer hides the complexity inside a simple interface.

At this point, you have a working agent server. You can run it on your machine, call it with Postman or cURL, and check responses. When this works, you are ready to deploy.
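For example, a quick test from Python (assuming the requests library and the server running on localhost:8000 as in main.py) looks like this:

import requests

# send a chat message to the locally running agent server
resp = requests.post(
    "http://localhost:8000/chat",
    json={"message": "What is the weather in San Francisco?"},
    timeout=30,
)
print(resp.json()["reply"])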

Deployment to Sevalla

You can choose any cloud provider, like AWS, DigitalOcean, or others to host your agent. I will be using Sevalla for this example.

Sevalla is a developer-friendly PaaS provider. It offers application hosting, database, object storage, and static site hosting for your projects.

Every platform will charge you for creating a cloud resource. Sevalla comes with a $50 credit for us to use, so we won’t incur any costs for this example.

Let’s push this project to GitHub so that we can connect our repository to Sevalla. We can also enable auto-deployments so that any new change to the repository is automatically deployed.

You can also fork my repository from here.

Log in to Sevalla and click on Applications -> Create new application. You will see the option to link your GitHub repository to create a new application.

Use the default settings and click “Create application”. Now we have to add our OpenAI API key to the environment variables. Once the application is created, click on the “Environment variables” section and save the OPENAI_API_KEY value as an environment variable.

Now we are ready to deploy our application. Click on “Deployments” and click “Deploy now”. It will take 2–3 minutes for the deployment to complete.

Once done, click on “Visit app”. You will see the application served via a URL ending with sevalla.app. This is your new root URL. You can replace localhost:8000 with this URL and test in Postman.


Congrats! Your first AI agent with tool calling is now live. You can extend it by adding more tools and capabilities; push your code to GitHub, and Sevalla will automatically deploy the new version to production.

Conclusion

Building AI agents is no longer a task for experts. With LangChain, you can write a few lines of code and create reasoning agents that respond to users and call functions on their own.

By wrapping the agent with FastAPI, you give it a doorway that apps and users can access. Finally, Sevalla makes it easy to push your agent live, monitor it, and run it in production.

This journey from agent idea to deployed service shows what modern AI development looks like. You start small. You explore tools. You wrap them and deploy them.

Then you iterate, add more capability, improve logic, and plug in real tools. Before long, you have a smart, living agent online. That is the power of this new wave of technology.

Hope you enjoyed this article. Sign up for my free newsletter TuringTalks.ai for more hands-on tutorials on AI. You can also visit my website.
