If you haven’t already, please see the two previous posts in this series to learn how to set up a local LLM and how to easily play with it in your web UI.

AI Agents

It all started in February of 2024, when I learned of the white paper “More Agents Is All You Need” by Junyou Li, Qin Zhang, Yangbin Yu, Qiang Fu, and Deheng Ye. You can read a community discussion of this paper here on Hacker News.

Essentially, this paper proposes that running the same query against the same LLM multiple times, without sharing data between the invocations, then ranking the responses and choosing the best one is a practical way to improve response quality.
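The idea can be sketched in a few lines of plain Python. Here `query_llm` is a hypothetical stub standing in for a real model call (e.g. to Ollama); a real run would sample the actual LLM, and the ranking step would be more sophisticated than a simple majority vote:

```python
import itertools
from collections import Counter

# Hypothetical stub in place of a real LLM call; cycles through canned
# answers to mimic the variation you'd see across independent samples.
_canned = itertools.cycle(["Paris", "Lyon", "Paris"])

def query_llm(prompt: str) -> str:
    return next(_canned)

def sample_and_vote(prompt: str, n: int = 9) -> str:
    """Run the same prompt n independent times and keep the most common
    answer -- a rough stand-in for the paper's sample-then-rank step."""
    answers = [query_llm(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]
```

With the stub above, the noisy minority answer gets voted out and `sample_and_vote("What is the capital of France?")` returns `"Paris"`.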

While this paper arguably makes a case against using an ensemble of distinct AI Agents, it introduced me to that concept, which led me to look for an open source framework to play with.

Introducing Crew AI

After comparing a few AI Agent frameworks, I ended up choosing Crew AI to play with. Crew AI is a relatively new framework, described as:

Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.

Agents

Agents are the lifeblood of Crew AI. Think of each one as an individual in an organization or company, with a specific background and set of skills. This is the main feature Crew AI offers: specialized Agents that perform the tasks they’re best at. I’ve been extending the MyAgents class below to include a variety of Agents I think I might use, and I pull in each agent as I need it.

Tasks

These are the activities you’d like your Agents to perform: research, writing, formatting, even coding. Each Task is assigned to an Agent.

Tools

Crew AI leverages Tools to perform actions that aren’t available out of the box: performing a Google search, doing a mathematical calculation, looking up a stock price, reading or writing a file, and so on.

This is the feature that originally drew me to Crew AI, as I had believed it was a novel idea. It turns out it isn’t particularly novel; you can achieve the same thing with OpenAI’s ChatGPT REST API via its function calling feature. However, in the spirit of running everything locally, I’m still excited to do this with a self-hosted Ollama instance. With Crew AI, you can write these custom tools yourself and even leverage a repository full of already written tools. This really expands the possibilities of working with LLMs!
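At its core, a custom tool is just a Python function with a docstring the LLM can read. The sketch below keeps the bare function so it stays dependency-free; in a real project you would wrap it with a decorator such as LangChain’s `@tool` and pass it to an Agent via its `tools` list (the function name and behavior here are illustrative, not part of Crew AI itself):

```python
def calculate(expression: str) -> str:
    """Evaluate a basic arithmetic expression, e.g. '2 + 3 * 4'."""
    # Restrict input to arithmetic characters before evaluating, so the
    # LLM can't pass arbitrary Python through this toy tool.
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        return "Error: only basic arithmetic is supported."
    try:
        return str(eval(expression))  # acceptable for a toy sketch, not production
    except (SyntaxError, ZeroDivisionError) as exc:
        return f"Error: {exc}"
```

Once wrapped as a tool, it would go into an agent definition like `tools=[calculate_tool]`, and the agent can decide to invoke it whenever a task calls for arithmetic.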

At the time of writing, Crew AI is dealing with a bug where some locally hosted Ollama models cannot properly interact with tools, and either fail to run altogether or get stuck in an infinite loop. You can read more about that at these links:

Crew

Your Crew is simply your collection of Agents and their assigned Tasks. Think of it as the single object you configure and run.

Minified Code Example

Let’s look at a minified example to understand how to use Crew AI:

from crewai import Agent, Task, Crew
from langchain_community.llms import Ollama


class MyAgents:
  def __init__(self):
    self.model = 'mistral:7b'
    self.llm = Ollama(model=self.model, base_url='http://192.168.1.124:11434')
    self.all_tools = []
    self.allow_delegation = False
    self.verbose = False

  def researcher(self) -> Agent:
    return Agent(
      role='Senior Research Analyst',
      goal='Uncover cutting-edge developments in AI and data science',
      backstory='You work at a leading tech think tank. Your expertise lies in identifying emerging trends. You have a knack for dissecting complex data and presenting actionable insights.',
      allow_delegation=self.allow_delegation,
      tools=self.all_tools,
      llm=self.llm,
      verbose=self.verbose
    )

  def writer(self) -> Agent:
    return Agent(
      role='Tech Content Strategist',
      goal='Craft compelling content on tech advancements',
      backstory='You are a renowned Content Strategist, known for your insightful and engaging articles. You transform complex concepts into compelling narratives.',
      allow_delegation=self.allow_delegation,
      tools=self.all_tools,
      llm=self.llm,
      verbose=self.verbose
    )


  def formatter(self) -> Agent:
    return Agent(
      role='formatter',
      goal='Format the text as asked. Leave out actions from discussion members that happen between brackets, eg (smiling).',
      backstory='You are an expert text formatter.',
      allow_delegation=self.allow_delegation,
      tools=self.all_tools,
      llm=self.llm,
      verbose=self.verbose
    )


# Prepare our agents
my_agents = MyAgents()
researcher_agent = my_agents.researcher()
writer_agent = my_agents.writer()
formatter_agent = my_agents.formatter()

# Define our tasks
task1 = Task(
  description='Conduct a comprehensive analysis.',
  expected_output='Full analysis report in bullet points of the health industry in the united states.',
  agent=researcher_agent
)
task2 = Task(
  description='Using the insights provided, develop an engaging blog post that highlights the most significant concepts of the health care industry in the US. Your post should be informative yet accessible, catering to an internet audience. Make it sound cool, avoid complex words so it does not sound like AI.',
  expected_output='Full blog post of at least 4 paragraphs',
  agent=writer_agent
)
task3 = Task(
  description='Using the information provided, format the written blog post in the style of markdown. Take advantage of headers, bold, italics, ordered lists, unordered lists, links, etc. Ensure that the formatting is easier to read for the end user.',
  expected_output='A formatted writeup in markdown.',
  agent=formatter_agent
)

# Configure our crew
crew = Crew(
  agents=[researcher_agent, writer_agent, formatter_agent],
  tasks=[task1, task2, task3],
  function_calling_llm=my_agents.llm
)

# Run our crew
result = crew.kickoff()
print(result)

# Investigate how the crew did
print(crew.usage_metrics)

The output of the crew, as requested, is a markdown writeup:

# Welcome to the Future of US Healthcare: AI and Data Science Transforming the Industry

Imagine a world where our healthcare system is smarter, more efficient, and personalized to each individual's unique needs. welcome to the future of US healthcare, where Artificial Intelligence (AI) and data science are transforming the industry in incredible ways.

## Generating Insight from Healthcare Data
Our healthcare system generates an astounding amount of data daily - from electronic health records to medical imaging. But how do we make sense of it all? That's where AI comes in.

By analyzing vast amounts of data, AI can help:
- **Diagnose diseases earlier and more accurately**
- **Predict disease progression**
- **Discover new drugs**
- **Assist in surgeries**

## Overcoming Challenges: Data Privacy, Security, and Regulatory Compliance
But the potential benefits of AI in healthcare aren't without challenges. Three critical areas require our attention:

1. **Data privacy and security**: We don't want sensitive medical information falling into the wrong hands.
2. **Regulatory compliance**: The US healthcare industry is complex and heavily regulated, making it a tough nut to crack for innovative technologies.

## Embracing the Future: Personalized Treatment Plans and Early Diagnosis
Despite these challenges, the future of healthcare in the US looks promising. We can expect continued advancements in technology that will revolutionize how we approach healthcare. As consumers, we can look forward to:
- **Personalized treatment plans based on genetic makeup or lifestyle factors**
- **Early diagnosis for potential health issues before they become serious**

## Conclusion
The US healthcare industry is on the brink of a technological revolution. With AI and data science at the forefront, we can look forward to more efficient, personalized, and accurate healthcare services. But as we embrace these advancements, it's essential that we prioritize data privacy, regulatory compliance, and ethical implications to ensure a bright future for all. Stay tuned for more exciting developments in US healthcare!

Overall, it looks like the three agents did exactly what we asked of them!

Swapping To Use OpenAI’s ChatGPT

If you wanted to, you could simply change the line:

self.llm = Ollama(model=self.model, base_url='http://192.168.1.124:11434')

to

  import os
  from langchain_community.llms import OpenAI

  os.environ['OPENAI_API_KEY'] = 'sk-...'
  self.llm = OpenAI(
      temperature=0.8,
      model_name='gpt-3.5-turbo'
  )

And suddenly you’re using OpenAI’s ChatGPT for all your Agents. There’s nothing stopping you from using a blend either: some Agents running locally against Ollama, and others running remotely against OpenAI. You’ll need an OpenAI account with a positive balance before you can successfully call GPT-3.5, GPT-4, etc. I find GPT-3.5 Turbo sufficient for my tasks, and much cheaper.

Dockerization

One of my blog posts would not be complete without a mention of Docker. Here is the Dockerfile:

FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt ./

RUN pip install -r requirements.txt
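The Dockerfile installs whatever is listed in requirements.txt. For this example it might contain something like the following (package names as published on PyPI; pinning versions is left out here but recommended):

```
crewai
langchain-community
```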

and here is the Docker Compose file I use to interact with Crew AI during development:

version: '3.8'

services:
  ai-agents:
    image: ai-agents
    container_name: ai-agents
    build:
      context: .
      dockerfile: Dockerfile
    working_dir: /app
    volumes:
      - .:/app
    # command: tail -f /dev/null # If we want to shell exec and play around, keep the container running.
    command: ["python", "main.py"] # Default command to run
    restart: unless-stopped
    depends_on:
      - ollama # Defined in my previous blog post
      - ollama-webui # Defined in my previous blog post

To run the container, simply run $ docker compose up, then wait for your final output to be produced.

If you are looking for a more interactive, development-friendly approach, simply swap the two command lines defined above, then shell into the running container yourself by running $ docker exec -it ai-agents /bin/bash. From that active terminal, run the command you want, like $ python main.py.

Notification

See this post on how to get notified when a long-running process ends. I use this to let myself know when my Crew has finished its work, especially when I’m running on a CPU instead of a faster GPU.