daveshap / ace_framework

ACE (Autonomous Cognitive Entities) - 100% local and open source autonomous agents

License: MIT License

Python 73.96% kvlang 0.11% Dockerfile 0.54% Shell 0.08% JavaScript 0.59% TypeScript 11.59% HTML 1.13% CSS 0.26% Svelte 6.67% Ruby 5.07% SCSS 0.01%

ace_framework's People

Contributors

alanh90, anselale, databassgit, daveshap, dpearson00, eltociear, georgiaphillips1008, hkniberg, lancecarlson, orlandox683, samgriek, thehunmonkgroup, tyjk


ace_framework's Issues

Proposed PRINCIPLES

Principles for ACE Framework Project

1. Be Scrappy

Don't wait for permission or controls. This is a purely volunteer group, so if something resonates, go for it. Experiment. Try stuff. Break stuff. Share your results. Use your own sandboxes, report results, and together we'll decide what gets pulled into MAIN via pull request. But as one member said: we need more data! So the principle here is to engage in activities that generate more data and more telemetry, so we can see and feel what's working and what isn't.

2. Avoid Vendor Lockin

We generally agree that we want to be model agnostic, while acknowledging there are problems with this; the overarching principle is that we don't want to get locked into working with any single vendor, OpenAI in particular. This means we tinker with multiple providers, vendors, and models. It also means a preference for open source wherever possible. This general principle covers several areas.

3. Task-Constrained Approach

The team acknowledges that the tasks the framework can accomplish are constrained by the capabilities provided to it. Identifying the potential task space is critical, and the framework should be designed with the types of tasks it should accomplish in mind. This means that milestones and capabilities should be measured by tests and tasks, so that we can remain empirical and objective-oriented.

4. Avoiding Overcomplication

The team agrees on the importance of not overcomplicating the project from the start. Modest milestones and a focus on what is feasible are recommended. As we're doing nothing short of aiming for fully autonomous software, we need to not "boil the ocean" or "eat the whole elephant." Small, incremental steps while keeping the grand vision in mind.

5. Establish New Principles

We're exploring an entirely new domain of software architecture and design. Some old paradigms will obviously still apply, but some won't. Thus, we are discovering and deriving new principles of autonomous software design as we go. This adds a layer of complexity to our task.

Question: Natural Language Dependence and utilizing Structured Logging in the ACE Framework

By design, the ACE framework relies heavily on natural language processing to communicate between layers.

While NLP makes the system human-readable and aligns well with the idea of cognitive entities, it also introduces the possibility of ambiguity, misunderstandings, or misalignments due to the complexities inherent in natural language.

Different models may interpret the same sentence differently, and the 'constitution' passed down through the layers could contain principles that are vague or contradictory when reinterpreted.

Have you considered implementing structured logging alongside free-text responses? This could serve multiple purposes:

  • Enhanced Monitoring: Structured logs could provide a more granular view into the system's state, making debugging and monitoring easier.

  • Plugin Points for Tooling: The metadata could serve as plugin points for both internal and external tools, offering hints on environmental states, additional capabilities, and resources.

  • Disambiguation: The metadata could help in disambiguating natural language instructions, ensuring that all layers have a uniform understanding of directives.
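A minimal sketch of what such a structured envelope might look like alongside the free-text message; the field names here are illustrative assumptions, not a proposed standard:

```python
# Hypothetical structured log entry carried alongside a natural-language
# message; the field names are illustrative, not a proposed standard.
import json
import time
import uuid

log_entry = {
    "id": str(uuid.uuid4()),
    "timestamp": time.time(),
    "layer": "aspirational",         # originating layer
    "bus": "southbound",             # which bus carried the message
    "directive_type": "imperative",  # machine-readable hint for disambiguation
    "capabilities": ["web_search"],  # plugin points for tooling
    "text": "Prioritize user safety over task completion.",
}
print(json.dumps(log_entry, indent=2))
```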

Proposed ROADMAP

Proposed Roadmap for ACE Framework

  1. Foundations of Single Agent Autonomy: Develop and refine the core capabilities of a single autonomous agent. This includes defining its scope of tasks, the tools it can utilize, and the principles governing its self-directed behavior. Establish robust memory systems and internal communication protocols that allow the agent to process and retain information effectively. Ensure that the agent can perform tasks independently and handle basic decision-making processes.
  • Define agent capabilities and toolset
  • Develop self-direction and decision-making algorithms
  • Implement memory systems for information retention
  • Establish internal communication protocols
  • Create and use tools on the fly
  • Self-checking and self-consistency (stretch goal)
  • Self-modification (stretch goal)
  2. Flat Network Collaboration: Implement and test a flat network where multiple agents can communicate and collaborate within a shared environment. Explore different communication models (e.g., round-robin, chat room, asynchronous messaging) to determine the most effective methods for agent interaction. Focus on enabling agents to share information, coordinate on tasks, and work towards common goals without hierarchical structure, all within a single operational domain or container.
  • Enable multi-agent communication within a single environment
  • Test various communication models for effectiveness
  • Coordinate shared tasks and collaborative goals
  • Ensure information sharing and task synchronization
  3. Cross-Container Team Dynamics: Expand the communication framework to support teams of agents operating across multiple containers or isolated environments. This stage involves establishing protocols for inter-container communication, ensuring that agents can collaborate effectively even when distributed across different systems or locations. Address challenges related to synchronization, data consistency, and task delegation among siloed agent teams.
  • Create protocols for communication across isolated environments
  • Manage distributed agent collaboration and data consistency
  • Facilitate task delegation and execution among siloed teams
  • Overcome challenges in synchronization and inter-container operations
  4. Hierarchical Communication Systems: Introduce hierarchical structures to the agent network, allowing for more complex organization and coordination. Develop vertical communication channels that enable agents to escalate issues, seek guidance, or report outcomes to higher-level agents. Simultaneously, maintain horizontal communication for peer-to-peer collaboration. This milestone focuses on creating a multi-layered network that can handle intricate tasks and workflows.
  • Introduce vertical communication for escalation and reporting
  • Maintain horizontal communication for peer-level collaboration
  • Develop a multi-layered network for complex task management
  • Implement control structures for information flow and task prioritization
  5. Self-Organizing Networks: Achieve the capability for agents to autonomously construct and optimize their own networks. Agents should be able to self-organize based on task requirements, environmental factors, and predefined objectives. This involves agents dynamically forming teams, assigning roles, and reconfiguring the network topology as needed. The principles established in previous milestones will guide the agents in creating efficient and effective networks that adapt to changing conditions and scale to accommodate various complexities.
  • Empower agents to autonomously form and optimize networks
  • Allow dynamic team formation and role assignment based on tasks
  • Adapt network topology to environmental changes and task demands
  • Scale networks to handle varying complexities autonomously

PITCH: Testing framework


Problem

We need a testing framework.

Appetite

3 seconds

Solution

pytest
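In the spirit of the three-second appetite, a minimal sketch of a first test file; the message round-trip below is a hypothetical placeholder, not an existing API:

```python
# test_sanity.py -- run with `pytest` from the repo root.
def test_truth():
    assert True  # the three-second version

def test_message_roundtrip():
    # Placeholder for publishing/consuming through a real bus component
    # once one lands in MAIN.
    sent = {"bus": "northbound", "text": "telemetry"}
    received = dict(sent)
    assert received == sent
```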

Rabbit Holes

Message delivery by carrier pigeon.

No-gos

pip uninstall pytest
import unittest
# Commit sins...

PITCH: ACE Framework message monitoring

Proposal for a RabbitMQ Logging Subscriber with Browser-Based UI

Objective:

Develop a robust RabbitMQ logging subscriber that records messages from all topics and provides a browser-based user interface (UI) for real-time log display. The UI will also support viewing archived log files and offer a search functionality to filter logs based on user input.

Logging Subscriber:

Exchange Type: The subscriber will bind to the RabbitMQ topic exchange using a # binding key to ensure all topic messages are captured.

Queue Properties: The logging queue will be durable and non-exclusive to ensure its persistence across broker restarts and accessibility across multiple connections.

Message Handling: Messages will be acknowledged after successful logging to ensure no loss of messages. Exception handling will be robust enough to handle any processing anomalies without interrupting the subscription.
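A minimal sketch of the subscriber described above, using pika; the exchange name ("ace"), queue name, and log path are illustrative assumptions, not part of the spec:

```python
# Logging subscriber sketch: durable, non-exclusive queue bound with "#"
# so every topic message is captured; ack only after the log write succeeds.
import logging

import pika

logging.basicConfig(filename="ace_messages.log", level=logging.INFO)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

channel.exchange_declare(exchange="ace", exchange_type="topic", durable=True)
channel.queue_declare(queue="ace.logging", durable=True, exclusive=False)
channel.queue_bind(queue="ace.logging", exchange="ace", routing_key="#")

def on_message(ch, method, properties, body):
    try:
        logging.info("%s %s", method.routing_key, body.decode("utf-8", "replace"))
        ch.basic_ack(delivery_tag=method.delivery_tag)  # ack after logging
    except Exception:
        # Requeue on failure so no message is lost.
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)

channel.basic_consume(queue="ace.logging", on_message_callback=on_message)
channel.start_consuming()
```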

Web-Based User Interface (UI):

Real-Time Display: As logs are written by the subscriber, they will be streamed to the UI, allowing real-time visibility of RabbitMQ messages.

(FUTURE) Log Archive Viewing: Users will have the option to select archived log files. Once selected, the UI will display logs from the chosen archive, allowing historical message viewing.

Search Functionality: A search input box will be provided, enabling users to filter currently displayed logs based on the entered text. This will aid in quick troubleshooting and log analysis.

Technical Components & Libraries:

Backend:

Language: Python.

RabbitMQ Client: pika for message subscription.

Web Framework: Flask for serving the WebSocket.

Frontend: TBD, pending a front-end dev resource

Framework: Svelte.dev + Vite

Real-Time Updates: WebSockets (e.g., Flask-SocketIO) for streaming logs to the UI.

UI Components: Material UI and Svelte components.

Styling: Tailwind
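A minimal sketch of the real-time streaming piece with Flask-SocketIO; the "log_line" event name, log path, and single-task guard are illustrative assumptions, not a settled design:

```python
# Tail the subscriber's log file and push new lines to connected browsers.
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")
tail_started = False

def tail_log(path="ace_messages.log"):
    """Background task: follow the log file and emit new lines to all clients."""
    with open(path) as f:
        f.seek(0, 2)  # start at the end of the file
        while True:
            line = f.readline()
            if line:
                socketio.emit("log_line", {"text": line.rstrip()})
            else:
                socketio.sleep(0.5)

@socketio.on("connect")
def on_connect():
    global tail_started
    if not tail_started:  # start the tail loop once, on the first connection
        tail_started = True
        socketio.start_background_task(tail_log)

if __name__ == "__main__":
    socketio.run(app, port=5000)
```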

Deployment & Scalability:

The solution will be designed to run locally on consumer-grade hardware. The application's modular nature will allow for future enhancements, including possible cloud deployments and distributed logging.

API INTERACTION SCHEMA north vs. south clarification?

Not sure if this is a small bug or I failed to grasp a concept, but it seems the initial description under the ## API INTERACTION SCHEMA prompt mixes up the North and South bus roles. For example, in layer2.txt:

# API INTERACTION SCHEMA
The USER will give you logs from the NORTH and SOUTH bus. Information from the SOUTH bus should be treated as lower level telemetry from the rest of the ACE. Information from the NORTH bus should be treated as imperatives, mandates, and judgments from on high. Your output will be two-pronged.

My understanding is that the SOUTH bus is coming from on high and the NORTH bus represents lower-level telemetry. Should the above prompt swap NORTH and SOUTH?

Also, just wanted to say I love the work you all are doing and am very excited to see this framework built out. I'm particularly looking forward to how this will be realized with a UI and management system.

PITCH: Resource Manager for ACE Framework


Problem

The Autonomous Cognitive Entity (ACE) framework is a collection of resources designed to function together. However, the framework specification provides no specific implementation for how these resources are managed. This includes starting the ACE, monitoring the components for failure, recovering from failures, and stopping the ACE.

Appetite

The team is prepared to invest approximately two weeks and 20-30 total man-hours to address this problem.

Solution

The proposed solution involves creating a 'Resource Manager', a long-running Python process. This process will treat ACE components as 'resources'. Each resource will be managed by a Python agent that operates in a manner similar to the Open Cluster Framework (OCF), specifically implementing its core 'start', 'stop', and 'monitor' methods. The operations are as follows (a minimal sketch follows the list):

  • Step 1: Initiate the Resource Manager. It calls the 'start' operation on all resource agents. The order is determined by a resource dependency graph, stored in a static configuration such as a YAML file.
  • Step 2: Once all resources are started, the Resource Manager enters a monitoring loop. It calls the 'monitor' operation on one resource agent at a time, moving from the least dependent to the most dependent resource as per the dependency graph.
  • Step 3: If a monitor returns a failure, the Resource Manager takes corrective action. It calls the 'stop' operation, then the 'start' operation on the agent of the failed resource and its dependent resources.
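A sketch of the three steps above; the ResourceAgent interface, YAML layout, and restart heuristic are assumptions for illustration, since the pitch does not fix a concrete API:

```python
# Resource Manager sketch: start in dependency order, monitor in a loop,
# restart a failed resource and its dependents.
import time
from typing import Protocol

import yaml  # pip install pyyaml

class ResourceAgent(Protocol):
    def start(self) -> None: ...
    def stop(self) -> None: ...
    def monitor(self) -> bool: ...  # True means healthy

def load_start_order(path: str) -> list[str]:
    """Topologically sort the dependency graph: least dependent resources first."""
    with open(path) as f:
        graph = yaml.safe_load(f)  # e.g. {"bus": [], "layer1": ["bus"], ...}
    order: list[str] = []
    seen: set[str] = set()
    def visit(name: str) -> None:
        if name in seen:
            return
        seen.add(name)
        for dep in graph.get(name, []):
            visit(dep)
        order.append(name)
    for name in graph:
        visit(name)
    return order

def run(agents: dict[str, ResourceAgent], order: list[str]) -> None:
    for name in order:                      # Step 1: start in dependency order
        agents[name].start()
    while True:                             # Step 2: monitoring loop
        for name in order:
            if not agents[name].monitor():  # Step 3: corrective action
                # Simplification: treat everything started after the failed
                # resource as a dependent of it.
                affected = order[order.index(name):]
                for dep in reversed(affected):
                    agents[dep].stop()
                for dep in affected:
                    agents[dep].start()
        time.sleep(5)
```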

Rabbit Holes

Given that the initial ACE is an MVP, the team should not be overly concerned with issues of scaling and performance, beyond those necessary for an individual user of the MVP to have a good experience.

No-gos

In order to maintain focus on the core problem and keep the solution manageable, the following aspects will not be included in this initial implementation:

  • Distributed resource management: The resource manager will not attempt to manage resources across a distributed system or network. It will focus solely on resources within a single ACE.
  • Automatic scaling or load balancing: While these are common features in robust resource managers, they won't be part of this initial solution.
  • Advanced failure recovery strategies: The resource manager will have a simple strategy of stopping and starting a failed resource and its dependencies. More advanced strategies, like resource migration or failover, will not be considered at this stage.

COST TO RUN & DEPENDENCE on OpenAI

When watching Dave's demo of the project, a big standout was his remark that he timed out the API just running the demo briefly, along with the sheer number of inferences that will need to be generated.

I don't think this limitation is necessary, and depending on a third party is not ideal. The limitation should instead be the amount of compute available, and getting this to run on consumer hardware would be best.

As such, I suggest using the dolphin-2.1-mistral-7b model, specifically a quantised version that runs with a maximum RAM requirement of only 7.63 GB and a download size of only 5.13 GB, via the llama-cpp-python bindings, which meet the project requirement of staying Python-only (see the sketch after the lists below).

There are benefits to doing it this way:

  • No dependence on a third party for the LLM (THE MOST ESSENTIAL COMPONENT)
  • No cost besides the electricity bill, and obviously upfront hardware cost

And benefits to this model specifically

  • Higher benchmark performance than Llama 70B
  • Apache 2.0 licensed, meaning commercially viable
  • Completely uncensored, which gives it higher performance and higher compliance with system and user prompts
  • Small model, which means higher performance and lower memory requirements
  • Quantised model, which means it can run with a maximum RAM requirement of 7.63 GB
  • GGUF format, which has massive support across many different bindings, with CPU, GPU, or mixed CPU/GPU support
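A minimal sketch of loading a quantised GGUF of this model with llama-cpp-python; the file path is hypothetical, and the ChatML prompt format is my assumption about this model's template:

```python
# Run a local quantised model instead of calling a third-party API.
from llama_cpp import Llama

llm = Llama(
    model_path="./dolphin-2.1-mistral-7b.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available; 0 = CPU only
)

prompt = (
    "<|im_start|>system\nYou are Dolphin, a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nHello!<|im_end|>\n"
    "<|im_start|>assistant\n"
)
output = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```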

This is just a suggestion, and this model will become outdated within the week.
But I think that this is truly the right way to go.

aspirational layer documentation confusion

At the end of the "Layer 1 Aspirational Layer" section of ACE_Framework.md it says "got it, here is an expanded version:" and then proceeds to "Layer 2: Global Strategy Layer". Is this saying that the Global Strategy Layer is an expanded version of the Aspirational Layer? Or was there originally another text block after the ':' which was then deleted prior to the original upload?

PITCH: Aspirational Layer Artificial Cognitive Entity (ACE)

Problem

As artificial intelligence becomes increasingly integrated into society, there's a growing urgency to ensure that these systems can navigate ethical landscapes and make morally sound decisions, a capability currently lacking in most AI solutions.

Appetite

We aim to build a prototype within the next two weeks. This timeline will allow us to create a foundational structure for the Aspirational Layer, capable of issuing moral judgments and making ethical decisions. We will also be able to perform initial tests to validate its effectiveness and reliability.

Solution

This layer serves as an ethical compass within an AI system; its components are made up of the following agents and subsystems:

  1. Constitution: This is a static text file that will be loaded on init. It contains the Heuristic Imperatives, The Statement on the UDHR, and the Mission Statement. It will be exactly as outlined in the framework.

  2. Heuristic Check Agent: This agent will be responsible for reviewing content generated by the other layers. It will only issue a pass/fail ruling. A pass will write to the bus for human verification and logging.

  3. Heuristic Reflexion Agent: This agent will be called when a check fails from the check agent. It can either rewrite the response and pass it back to the Check Agent for verification again, or it can send feedback to the generating agent on lower layers.

The Aspirational Layer will interface through a "northbound bus" for inputs and a "southbound bus" for outputs, making it compatible and scalable within the ACE Framework architecture. We are not defining the format of these messages at this time; for now we will stick to natural language until a need for message formatting is identified.
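A minimal sketch of the check/reflexion flow described above; the judge callable, prompts, and constitution path are illustrative assumptions:

```python
# Heuristic Check + Reflexion sketch. `judge` stands in for whatever LLM
# call the implementation settles on (prompt in, text out).
from pathlib import Path
from typing import Callable

CONSTITUTION = Path("constitution.txt").read_text()  # loaded once on init

def heuristic_check(content: str, judge: Callable[[str], str]) -> bool:
    """Pass/fail ruling on content against the Constitution."""
    verdict = judge(
        f"{CONSTITUTION}\n\nDoes the following content comply? "
        f"Answer PASS or FAIL only.\n\n{content}"
    )
    return verdict.strip().upper().startswith("PASS")

def heuristic_reflexion(
    content: str, judge: Callable[[str], str], max_rewrites: int = 2
) -> str | None:
    """Rewrite failing content and re-check; None means send feedback south."""
    for _ in range(max_rewrites):
        content = judge(f"{CONSTITUTION}\n\nRewrite this to comply:\n\n{content}")
        if heuristic_check(content, judge):
            return content  # passes; write to the bus for human verification
    return None  # give feedback to the generating agent on a lower layer
```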

Rabbit Holes

  1. Human Interaction: Designing how the Aspirational Layer will interact with human operators to confirm or override its decisions.

  2. Feedback Loop: Creating a mechanism where the Aspirational Layer learns from its decisions and judgments, involving some form of human oversight or memory/reflexion.

No-gos

  1. No Legal Guarantees: While the layer aims to make ethically sound decisions, it does not promise to always align with legal standards or regulations.

  2. No Specific Religious or Cultural Ethics: The ethical framework will aim to be as universal as possible and will not cater to specific religious or cultural ethical guidelines.

  3. No Feedback Mechanism: There won't be a feature for users to agree or disagree with the ethical judgments made, as the focus is on delivering a calculated ethical decision based on pre-defined principles.

By addressing these components, we aim to build an Artificial Cognitive Entity that not only mimics human-like thinking and decision-making but also incorporates the ethical considerations that are fundamental to human interaction.

How can you use Stacey with a local model instead of OpenAI?

I saw a comment from Dave on YouTube that ACE_Framework can be used with local models, but it appears the Stacey demo is not yet capable of running on a local LLM.


I have already setup https://github.com/Josh-XT/Local-LLM/
Local-LLM is a llama.cpp server in Docker with OpenAI Style Endpoints.

Instead of it going online to OpenAI, how can you make it go to http://localhost:8091/v1 ?

Or is there an easier way of making it work with local models?
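One common pattern, assuming the demo uses the openai Python client (version 1.x); whether Stacey actually exposes this configuration is exactly what this issue is asking:

```python
# Point OpenAI-style calls at a local OpenAI-compatible server instead.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8091/v1",  # the Local-LLM server above
    api_key="not-needed",                 # local servers typically ignore it
)

response = client.chat.completions.create(
    model="local-model",  # model name is server-dependent
    messages=[{"role": "user", "content": "Hello from a local model!"}],
)
print(response.choices[0].message.content)
```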

Where is the code?

Hey, nice project, but I don't see any code. Where is it? Or maybe I'm dumb.

Typo in the ACE Paper + Possible Reference Issue

"provably" should be "probably"

This reference might be wrong for the following text from the paper:
"Neural memory architectures provide dynamic episodic state tracking [53]"
The cited reference [53] is "An introduction to variational autoencoders", Foundations and Trends® in Machine Learning.
In that PDF (book), I didn't find any of these words or phrases: memory (except in the references), neural memory, memory architectures, state tracking, tracking.
Found the book here: https://zlib-articles.se/book/79113047/78c0f8
