AI Agent capable of automating various tasks using MCP
Looking for the Python version? Check out Saiku.py.
This project aims to create a robust, intelligent AI agent capable of automating a wide range of tasks. The agent is designed following the PEAS (Performance measure, Environment, Actuators, Sensors) framework to ensure it is robust, scalable, and efficient.
Saiku leverages the Model Context Protocol (MCP), a standard for enabling AI models to interact with external tools and resources securely and efficiently. MCP is becoming increasingly important in the AI landscape, allowing agents like Saiku to discover and invoke tools, access resources, and use prompts exposed by MCP servers in a standardized way.
By building on MCP, Saiku ensures a flexible, extensible, and future-proof architecture for AI agent development. Learn more about MCP at modelcontextprotocol.io.
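Under MCP, tool invocations travel as JSON-RPC 2.0 messages. The sketch below shows the shape of a `tools/call` request an MCP client sends to a server; the field names follow the MCP specification, while the tool name and arguments are hypothetical examples, not tools Saiku necessarily ships:

```typescript
// Shape of an MCP "tools/call" request (JSON-RPC 2.0).
// The tool name and arguments below are hypothetical examples.
interface McpToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: {
    name: string;                        // tool to invoke on the MCP server
    arguments: Record<string, unknown>;  // tool-specific input
  };
}

const request: McpToolCallRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: { name: "read_file", arguments: { path: "README.md" } },
};
```

Because every server speaks this same wire format, an agent like Saiku can use tools from any MCP server without server-specific glue code.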
"Saiku" (細工) in Japanese refers to detailed or delicate work, symbolizing the intricate and intelligent workings of our AI agent.
We chose a Japanese name to symbolize precision, innovation, and advanced technology, attributes highly respected in Japanese culture. Even though we are based in Tunisia, we believe in global collaboration and the universal appeal and understanding of technology.
PEAS stands for Performance measure, Environment, Actuators, and Sensors. It's a framework used to describe the components of an intelligent agent:

- Performance measure: the criteria by which the agent's success is judged.
- Environment: the surroundings in which the agent operates.
- Actuators: the means by which the agent acts on its environment.
- Sensors: the means by which the agent perceives its environment.
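For a CLI automation agent like Saiku, a PEAS description can be sketched as a plain data structure. The concrete values below are illustrative, not taken from the codebase:

```typescript
// Illustrative PEAS description for a CLI automation agent.
interface PeasDescription {
  performanceMeasure: string[]; // what "doing well" means for the agent
  environment: string[];        // where the agent operates
  actuators: string[];          // how it acts on the environment
  sensors: string[];            // how it perceives the environment
}

const saikuPeas: PeasDescription = {
  performanceMeasure: ["task completion", "response accuracy"],
  environment: ["user's shell", "connected MCP servers"],
  actuators: ["MCP tool calls", "terminal output"],
  sensors: ["user messages", "MCP tool results"],
};
```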
Saiku can be integrated into your applications to leverage its agent capabilities.
Run `npm install saiku` in your project directory:

```sh
npm install saiku
```

Then import and initialize the agent:

```typescript
import Agent from 'saiku';
// Or import specific components if needed

async function main(opts) {
  // Ensure MCP client/server setup is handled appropriately
  const agent = new Agent(opts); // Initialize the agent
  // ...
}
```
AgentOptions:

- `systemMessage` (string, optional): Default system message or instructions for the LLM.
- `allowCodeExecution` (boolean, optional): Flag to enable/disable code execution (typically handled by a dedicated MCP server now).
- `interactive` (boolean | string, optional): Interactive mode setting for CLI usage.
- `llm` ('openai' | 'vertexai' | 'ollama' | 'huggingface' | 'mistral' | 'anthropic'): Specifies the language model. Default is 'openai'.
- Additional custom properties are allowed.

Example Configuration:
```typescript
let opts = {
  systemMessage: "You are Saiku, an AI assistant.",
  interactive: true,
  llm: "openai",
  // Custom options
};
```
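Based on the option list above, `AgentOptions` can be modeled roughly as the following interface. This is a sketch for orientation; the actual type in the Saiku codebase may differ:

```typescript
// Rough model of AgentOptions as described above (sketch only).
interface AgentOptions {
  systemMessage?: string;       // default system instructions for the LLM
  allowCodeExecution?: boolean; // deprecated; handled by MCP servers now
  interactive?: boolean | string; // interactive mode for CLI usage
  llm?: "openai" | "vertexai" | "ollama" | "huggingface" | "mistral" | "anthropic";
  [key: string]: unknown;       // additional custom properties
}

const exampleOpts: AgentOptions = {
  systemMessage: "You are Saiku, an AI assistant.",
  interactive: true,
  llm: "openai",
};
```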
Process: push a user message onto `agent.messages`, call `agent.interact()`, then read the agent's reply from the last message in `agent.messages`.
Example Interaction:
```typescript
// Assuming 'agent' is an initialized Agent instance
async function runInteraction(agent, userQuery) {
  agent.messages.push({ role: "user", content: userQuery });
  await agent.interact(); // Agent processes the query, potentially using MCP tools
  // Handle the agent's response (last message in agent.messages)
}
```
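To make the message flow concrete, the loop can be simulated with a minimal stand-in agent. This is a mock, not the real `Agent` class from `saiku`; its `interact()` is synchronous for simplicity, whereas the real method is async and may call MCP tools:

```typescript
// Minimal mock of the Agent message loop (not the real 'saiku' Agent class).
type Message = { role: "user" | "assistant"; content: string };

class MockAgent {
  messages: Message[] = [];

  // Stands in for Agent.interact(): reads the last user message
  // and appends a canned assistant reply.
  interact(): void {
    const last = this.messages[this.messages.length - 1];
    this.messages.push({ role: "assistant", content: `Echo: ${last.content}` });
  }
}

const mockAgent = new MockAgent();
mockAgent.messages.push({ role: "user", content: "Hello" });
mockAgent.interact();
const reply = mockAgent.messages[mockAgent.messages.length - 1];
```

The pattern to note is that the conversation state lives in `messages`: the caller appends user input, and the agent appends its reply to the same array.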
Clone the Repository:

```sh
git clone https://github.com/nooqta/saiku.git
```

Navigate to the Project Folder:

```sh
cd saiku
```

Install Dependencies:

```sh
npm install
```

Run the Project Locally:

Before starting Saiku locally, build the project:

```sh
npm run build
```

To start the agent in interactive CLI mode:

```sh
npm start
```

For automated rebuilding during development:

```sh
npm run watch
```
Global installation is possible but not recommended due to ongoing development:

```sh
npm install -g saiku
```
https://github.com/nooqta/saiku/assets/3036133/87752826-fc6a-4c16-91a7-917b0f79427a
(Note: May require updates for MCP compatibility)
Configure the necessary environment variables for the core agent and any MCP servers you intend to use. Copy the example environment file:

```sh
cp .env.example .env
```
Edit the `.env` file. At a minimum, you need an LLM API key:

```sh
# OpenAI (Example)
OPENAI_API_KEY=your_openai_api_key
OPENAI_MODEL=gpt-4-turbo # Or another model

# Add other API keys as needed for specific MCP servers
# e.g., ELEVENLABS_API_KEY=... for the ElevenLabs MCP server
# e.g., GOOGLE_APPLICATION_CREDENTIALS=path/to/your/keyfile.json for Google Cloud servers
```
Refer to the documentation of individual MCP servers for their specific environment variable requirements.
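A typical pattern for consuming these variables in Node.js is to validate the required key and fall back to a default model when `OPENAI_MODEL` is unset. This is a sketch, not code from the Saiku codebase; the variable names match the `.env` example above, and the fallback model is an arbitrary choice:

```typescript
// Read LLM configuration from the environment, with a fallback model.
// Variable names match the .env example; the default model is arbitrary.
function getLlmConfig(env: Record<string, string | undefined>) {
  const apiKey = env.OPENAI_API_KEY;
  if (!apiKey) {
    throw new Error("OPENAI_API_KEY is required; see .env.example");
  }
  return {
    apiKey,
    model: env.OPENAI_MODEL ?? "gpt-4-turbo", // default model if unset
  };
}

// In real use the values would come from process.env after loading .env.
const llmConfig = getLlmConfig({ OPENAI_API_KEY: "sk-test" });
```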
Use the Saiku CLI with various options:

```
AI agent to help automate your tasks

Options:
  -v, --version                Output the current version.
  -exec, --allowCodeExecution  (Deprecated - handled by MCP) Execute code without prompting.
  -role, --systemMessage       The model system role message.
  -m, --llm <model>            Specify the language model (openai, vertexai, ollama, huggingface, mistral, anthropic). Default: openai.
  -h, --help                   Display help for command.

Commands:
  mcp [options]                Manage MCP servers.
  workflow [options]           Manage and run workflows.
  autopilot [options]          (Experimental) Run Saiku in autopilot mode.
  serve                        Chat with the Saiku agent in the browser.
  help [command]               Display help for a specific command.
```
To start the interactive CLI with a specific LLM:

```sh
npm start -- -m ollama
```

To run a specific workflow:

```sh
npm start -- workflow run <workflow_name>
```

To list connected MCP servers:

```sh
npm start -- mcp list
```

To chat with Saiku in the browser:

```sh
npm start -- serve
```
Saiku achieves tasks by leveraging tools provided by connected MCP servers or through specific extensions.
Saiku includes a workflow engine that allows you to define complex, multi-step tasks in a JSON format. These workflows can chain together multiple LLM calls and MCP tool uses to automate sophisticated processes.
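To illustrate the chaining idea, here is a tiny sequential runner where each step's output feeds the next step. The step schema is invented for illustration and is not Saiku's actual workflow format (see `workflows.json` for that):

```typescript
// Hypothetical multi-step workflow and a tiny sequential runner.
// The schema is illustrative only, not Saiku's real workflow format.
type Step = { name: string; run: (input: string) => string };

const workflow: Step[] = [
  { name: "uppercase", run: (s) => s.toUpperCase() },
  { name: "exclaim", run: (s) => s + "!" },
];

function runWorkflow(steps: Step[], input: string): string {
  // Each step receives the previous step's output.
  return steps.reduce((acc, step) => step.run(acc), input);
}

const workflowResult = runWorkflow(workflow, "hello"); // "HELLO!"
```

In Saiku's engine the individual steps would be LLM calls or MCP tool uses rather than string transforms, but the chaining principle is the same.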
Workflows are defined in JSON (see `workflows.json` for examples). List available workflows with `npm start -- workflow list` and run one with `npm start -- workflow run <workflow_name> [input_data]`.

We welcome contributions! Please follow these steps:
1. Fork the repository.
2. Create your feature branch (`git checkout -b feature/YourFeature`).
3. Commit your changes (`git commit -m 'Add some feature'`).
4. Push to the branch (`git push origin feature/YourFeature`).
5. Open a pull request.

We are actively seeking sponsors and contributors. Your support helps accelerate development.
Please open an issue on our GitHub repository for feedback or bug reports.
Be mindful of the rate limits and costs associated with the LLM APIs and any external services used by MCP servers.
Saiku is under active development. Expect changes to the architecture and features.
This project is licensed under the MIT License - see the LICENSE.md file for details.