daveshap / ACE_Framework
ACE (Autonomous Cognitive Entities) - 100% local and open source autonomous agents
License: MIT License
Principles for ACE Framework Project
Don't wait for permission or controls. This is a purely volunteer group, so if something resonates, go for it. Experiment. Try stuff. Break stuff. Share your results. Use your own sandboxes, report results, and together we'll decide what gets pulled into MAIN via pull request. As one member said: we need more data! So the principle here is to engage in activities that generate more data and more telemetry, so we can see and feel what's working and what isn't.
While we generally agree that we want to be model agnostic, and acknowledge there are problems with this, the overarching principle is that we don't want to get locked into working with any single vendor, OpenAI in particular. This means we tinker with multiple providers, vendors, and models. It also means a preference for open source wherever possible. This general principle covers several areas.
The team acknowledges that the tasks the framework can accomplish are constrained by the capabilities provided to it. Identifying the potential task space is critical, and the framework should be designed with the types of tasks it should be able to accomplish in mind. This means that milestones and capabilities should be measured by tests and tasks, so that we remain empirical and objective.
The team agrees on the importance of not overcomplicating the project from the start. Modest milestones and a focus on what is feasible are recommended. As we're doing nothing short of aiming for fully autonomous software, we need to not "boil the ocean" or "eat the whole elephant." Small, incremental steps while keeping the grand vision in mind.
We're exploring an entirely new domain of software architecture and design. Some old paradigms will obviously still apply, but some won't. Thus, we are discovering and deriving new principles of autonomous software design as we go. This adds a layer of complexity to our task.
In agile.md, "copy/pasta" should be "copy/paste":
https://github.com/daveshap/ACE_Framework/blame/f1b99784f3a308511a6d6591375aec5a4fc69df1/agile.md#L12
By design, the ACE framework relies heavily on natural language processing to communicate between layers.
While NLP makes the system human-readable and aligns well with the idea of cognitive entities, it also introduces the possibility of ambiguity, misunderstandings, or misalignments due to the complexities inherent in natural language.
Different models may interpret the same sentence differently, and the 'constitution' passed down through the layers could contain principles that are vague or contradictory when reinterpreted.
Have you considered implementing structured logging alongside free-text responses? This could serve multiple purposes:
Enhanced Monitoring: Structured logs could provide a more granular view into the system's state, making debugging and monitoring easier.
Plugin Points for Tooling: The metadata could serve as plugin points for both internal and external tools, offering hints on environmental states, additional capabilities, and resources.
Disambiguation: The metadata could help in disambiguating natural language instructions, ensuring that all layers have a uniform understanding of directives.
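As a concrete illustration, here is a minimal sketch of free text wrapped in a structured JSON envelope. All field names here (layer, direction, metadata, and so on) are hypothetical, not part of the framework spec.

```python
import json
from datetime import datetime, timezone

def make_bus_message(layer: str, direction: str, text: str, **metadata) -> str:
    """Wrap a free-text message in a structured envelope.

    The natural-language payload stays human-readable, while the metadata
    gives monitoring and tooling a machine-readable view of the same event.
    """
    envelope = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "layer": layer,            # e.g. "global_strategy"
        "direction": direction,    # "northbound" or "southbound"
        "text": text,              # the original free-text message
        "metadata": metadata,      # plugin points for internal/external tools
    }
    return json.dumps(envelope)

# Example: a southbound directive carrying a disambiguating hint.
print(make_bus_message(
    "global_strategy", "southbound",
    "Prioritize user safety over task completion.",
    priority="high",
))
```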
We need a testing framework.
Suggestions so far:
- pytest
- Message delivery by carrier pigeon.
- pip uninstall pytest, then import unittest (# Commit sins...)
Proposal for a RabbitMQ Logging Subscriber with Browser-Based UI
Objective:
Develop a robust RabbitMQ logging subscriber that records messages from all topics and provides a browser-based user interface (UI) for real-time log display. The UI will also support viewing archived log files and offer a search functionality to filter logs based on user input.
Logging Subscriber:
Exchange Type: The subscriber will bind to the RabbitMQ topic exchange using a # binding key to ensure all topic messages are captured.
Queue Properties: The logging queue will be durable and non-exclusive to ensure its persistence across broker restarts and accessibility across multiple connections.
Message Handling: Messages will be acknowledged after successful logging to ensure no loss of messages. Exception handling will be robust so that processing anomalies do not interrupt the subscription. A minimal sketch of such a subscriber follows.
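For illustration, assuming pika against a local broker; the exchange and queue names and the log file path are placeholders, not part of the proposal:

```python
import pika

# Exchange/queue names below are illustrative placeholders.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

channel.exchange_declare(exchange="ace_logs", exchange_type="topic")
# Durable, non-exclusive queue: survives broker restarts and is
# accessible across multiple connections.
channel.queue_declare(queue="ace_logging", durable=True, exclusive=False)
# The "#" binding key captures messages published on every topic.
channel.queue_bind(exchange="ace_logs", queue="ace_logging", routing_key="#")

def on_message(ch, method, properties, body):
    try:
        with open("ace.log", "a") as log_file:
            log_file.write(f"{method.routing_key}: {body.decode()}\n")
        # Acknowledge only after the message has been safely logged.
        ch.basic_ack(delivery_tag=method.delivery_tag)
    except Exception:
        # Requeue on failure so no message is lost.
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)

channel.basic_consume(queue="ace_logging", on_message_callback=on_message)
channel.start_consuming()
```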
Web-Based User Interface (UI):
Real-Time Display: As logs are written by the subscriber, they will be streamed to the UI, allowing real-time visibility of RabbitMQ messages.
(FUTURE) Log Archive Viewing: Users will have the option to select archived log files. Once selected, the UI will display logs from the chosen archive, allowing historical message viewing.
Search Functionality: A search input box will be provided, enabling users to filter currently displayed logs based on the entered text. This will aid in quick troubleshooting and log analysis.
Technical Components & Libraries:
Backend:
Language: Python.
RabbitMQ Client: pika for message subscription.
Web Framework: Flask for serving the WebSocket.
Frontend: TBD, pending a frontend dev resource
Framework: Svelte.dev + Vite
Real-Time Updates: WebSockets (e.g., Flask-SocketIO) for streaming logs to the UI (see the sketch after this list).
UI Components: Material UI and Svelte components.
Tailwind for styling
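As a rough sketch of the real-time streaming piece, assuming Flask-SocketIO on the backend; the event name and port are illustrative:

```python
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

def stream_log_line(line: str) -> None:
    # Called by the logging subscriber for each new line; pushes the
    # line to every connected browser. "log_line" is a made-up event name.
    socketio.emit("log_line", {"text": line})

if __name__ == "__main__":
    socketio.run(app, host="127.0.0.1", port=5000)
```

A Svelte client would then listen for the same event via socket.io-client and append incoming lines to the display.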
The solution will be designed to run locally on consumer-grade hardware. The application's modular nature will allow for future enhancements, including possible cloud deployments and distributed logging.
Not sure if this is a small bug or I failed to grasp a concept, but it seems the initial description under the ## API INTERACTION SCHEMA prompt mixes up the North and South bus roles. For example in layer2.txt
# API INTERACTION SCHEMA
The USER will give you logs from the NORTH and SOUTH bus. Information from the SOUTH bus should be treated as lower level telemetry from the rest of the ACE. Information from the NORTH bus should be treated as imperatives, mandates, and judgments from on high. Your output will be two-pronged.
My understanding is that the SOUTH bus is coming from on high and the NORTH bus represents lower-level telemetry. Should the above prompt swap NORTH and SOUTH?
Also, just wanted to say I love the work you all are doing and am very excited to see this framework built out. I'm particularly looking forward to how this will be realized with a UI and management system.
The Autonomous Cognitive Entity (ACE) framework is a collection of resources designed to function together. However, the framework specification provides no specific implementation for how these resources are managed. This includes starting the ACE, monitoring the components for failure, recovering from failures, and stopping the ACE.
The team is prepared to invest approximately two weeks and 20-30 total man-hours to address this problem.
The proposed solution involves creating a 'Resource Manager', a long-running Python process. This process will treat ACE components as 'resources'. Each resource will be managed by a Python agent that operates in a manner similar to the Open Cluster Framework (OCF), specifically implementing its core 'start', 'stop', and 'monitor' methods. The operations are as follows:
- start: launch the component and confirm it is running.
- stop: shut the component down cleanly.
- monitor: periodically check the component's health and report failures.
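A minimal sketch of such an agent, assuming each component runs as a local subprocess; class names beyond start/stop/monitor are illustrative:

```python
import subprocess
from abc import ABC, abstractmethod

class ResourceAgent(ABC):
    """OCF-style agent managing one ACE component."""

    @abstractmethod
    def start(self) -> None: ...

    @abstractmethod
    def stop(self) -> None: ...

    @abstractmethod
    def monitor(self) -> bool:
        """Return True if the resource is healthy."""

class ProcessAgent(ResourceAgent):
    """Manages a component that runs as a local subprocess."""

    def __init__(self, command: list[str]) -> None:
        self.command = command
        self.process: subprocess.Popen | None = None

    def start(self) -> None:
        self.process = subprocess.Popen(self.command)

    def stop(self) -> None:
        if self.process is not None:
            self.process.terminate()
            self.process.wait()

    def monitor(self) -> bool:
        # poll() returns None while the process is still running.
        return self.process is not None and self.process.poll() is None
```

The Resource Manager would then loop over its agents, calling monitor() periodically and restarting any resource whose check fails.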
Given that the initial ACE is an MVP, the team should not be overly concerned with issues of scaling and performance, beyond those necessary for an individual user of the MVP to have a good experience.
In order to maintain focus on the core problem and keep the solution manageable, the following aspects will not be included in this initial implementation:
When watching Dave's demo of the project, a big standout was his remark about hitting API rate limits after running the demo only briefly, and the sheer number of inferences that will need to be generated.
I don't think this limitation is necessary, and depending on a third party is not ideal. The limiting factor should instead be the amount of compute available, and getting this to run on consumer hardware would be best.
As such, I suggest using the dolphin-2.1-mistral-7b model.
Specifically, a quantised version that can run with a maximum RAM requirement of only 7.63 GB and a download size of only 5.13 GB.
It can be driven through the llama-cpp-python bindings, which meet the project requirement of being Python-only.
This is just a suggestion, and this model will become outdated within the week.
But I think that this is truly the right way to go.
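For what it's worth, here is a minimal sketch of loading a quantised GGUF build with llama-cpp-python; the file name and parameters are assumptions, not tested settings:

```python
from llama_cpp import Llama

# Path points at a quantised dolphin-2.1-mistral-7b build; adjust to taste.
llm = Llama(model_path="./dolphin-2.1-mistral-7b.Q4_K_M.gguf", n_ctx=4096)

output = llm(
    "You are one layer of an ACE. Summarize the telemetry below:\n...",
    max_tokens=256,
)
print(output["choices"][0]["text"])
```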
Link the one-hour overview video in the root README.md.
https://www.youtube.com/watch?v=A_BL_pu4Gtk
Destination: https://github.com/daveshap/ACE_Framework/blob/main/README.md
At the end of the "Layer 1 Aspirational Layer" section of ACE_Framework.md it says "got it, here is an expanded version:" and then proceeds to "Layer 2: Global Strategy Layer". Is this saying that the Global Strategy Layer is an expanded version of the Aspirational Layer? Or was there originally another text block after the ':' which was then deleted prior to the original upload?
As artificial intelligence becomes increasingly integrated into society, there's a growing urgency to ensure that these systems can navigate ethical landscapes and make morally sound decisions, a capability currently lacking in most AI solutions.
We aim to build a prototype within the next two weeks. This timeline will allow us to create a foundational structure for the Aspirational Layer, capable of issuing moral judgments and making ethical decisions. We will also be able to perform initial tests to validate its effectiveness and reliability.
This layer serves as an ethical compass within an AI system. Its components are made up of the following agents and subsystems:
Constitution: This is a static text file that will be loaded on init. It contains the Heuristic Imperatives, The Statement on the UDHR, and the Mission Statement. It will be exactly as outlined in the framework.
Heuristic Check Agent: This agent will be responsible for reviewing content generated by the other layers. It will only issue a pass/fail ruling. A pass will write to the bus for human verification and logging.
Heuristic Reflexion Agent: This agent will be called when a check fails from the check agent. It can either rewrite the response and pass it back to the Check Agent for verification again, or it can send feedback to the generating agent on lower layers.
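To make the flow concrete, here is a toy sketch of the Check/Reflexion loop; the banned-word check is a stand-in for a real LLM judgment against the constitution, and all names are illustrative.

```python
BANNED = ("harm", "deceive")  # toy stand-in for the real constitution

def heuristic_check(content: str) -> bool:
    """Pass/fail ruling. A real agent would consult an LLM and the constitution."""
    return not any(word in content.lower() for word in BANNED)

def heuristic_reflexion(content: str) -> str:
    """Attempt a rewrite of failing content. A real agent would regenerate via an LLM."""
    for word in BANNED:
        content = content.replace(word, "[redacted]")
    return content

def review(content: str, max_attempts: int = 3) -> str | None:
    """Check the content; on failure, reflex and re-check, up to a limit."""
    for _ in range(max_attempts):
        if heuristic_check(content):
            return content  # pass: write to the bus for human verification
        content = heuristic_reflexion(content)
    return None  # still failing: send feedback to the generating layer
```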
The Aspirational Layer will interface through a "northbound bus" for inputs and a "southbound bus" for outputs, making it compatible and scalable within the ACE Framework architecture. We are not defining the format of these messages at this time; for now, we will stick to natural language until a need for message formatting is identified.
Human Interaction: Designing how the Aspirational Layer will interact with human operators to confirm or override its decisions.
Feedback Loop: Creating a mechanism where the Aspirational Layer learns from its decisions and judgments, involving some form of human oversight or memory/reflexion.
No Legal Guarantees: While the layer aims to make ethically sound decisions, it does not promise to always align with legal standards or regulations.
No Specific Religious or Cultural Ethics: The ethical framework will aim to be as universal as possible and will not cater to specific religious or cultural ethical guidelines.
No Feedback Mechanism: There won't be a feature for users to agree or disagree with the ethical judgments made, as the focus is on delivering a calculated ethical decision based on pre-defined principles.
By addressing these components, we aim to build an Artificial Cognitive Entity that not only mimics human-like thinking and decision-making but also incorporates the ethical considerations that are fundamental to human interaction.
I saw a comment from Dave on YouTube saying that the ACE_Framework can be used with local models, but it appears the Stacey demo is not yet capable of running on a local LLM.
How can you use Stacey with a local model instead of Openai?
I have already setup https://github.com/Josh-XT/Local-LLM/
Local-LLM is a llama.cpp server in Docker with OpenAI Style Endpoints.
Instead of it going online to OpenAI, how can you make it go to http://localhost:8091/v1 ?
Or is there an easier way of making it work with local models?
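Assuming Stacey talks to OpenAI through the pre-1.0 openai Python package, overriding the base URL is usually all it takes to point it at an OpenAI-style local server; the model name below is illustrative:

```python
import openai

# Point the client at the local OpenAI-compatible server instead of api.openai.com.
openai.api_base = "http://localhost:8091/v1"
openai.api_key = "not-needed-for-local"  # most local servers ignore the key

response = openai.ChatCompletion.create(
    model="dolphin-2.1-mistral-7b",  # whatever model Local-LLM is serving
    messages=[{"role": "user", "content": "Hello, Stacey!"}],
)
print(response["choices"][0]["message"]["content"])
```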
Hey, nice project, but I don't see any code. Where is it? Or maybe I'm missing something.
"produeces" should be "produces" in Fig 8 description of the Paper.
provably should be probably
This reference might be wrong for the following text from the paper:
"Neural memory architectures provide dynamic episodic state tracking [53]"
Here is reference [53]: An Introduction to Variational Autoencoders, Foundations and Trends® in Machine Learning.
In that PDF (book), I didn't find these words or phrases: "memory" (except in references), "neural memory", "memory architectures", "state tracking", "tracking".
Found the book here: https://zlib-articles.se/book/79113047/78c0f8