Last updated: 2025-02-10 06:27:40
# HiveChain: Redefining AI Development
**What if building powerful AI workflows wasn’t restricted by closed, vendor-managed APIs?**
What if you—*the developer*—could orchestrate advanced reasoning, memory management, and multi-agent collaboration locally, with full transparency and a performance-based rewards model?
Meet **HiveChain**: an **open-source ecosystem** that flips the AI execution paradigm from corporate back ends to a **developer-centric front end**. By combining powerful AI orchestration with responsible alignment practices and a transparent rewards system, HiveChain stands at the forefront of **capable and conscientious AI**.
---
## Table of Contents
1. [Vision & Purpose](#vision--purpose)
2. [Key Features](#key-features)
3. [Why HiveChain?](#why-hivechain)
4. [Alignment in Practice](#alignment-in-practice)
5. [Contributor Incentive Model](#contributor-incentive-model)
6. [Open-Book Transparency](#open-book-transparency)
7. [Monetization Strategy](#monetization-strategy)
8. [Roadmap & Disclaimer](#roadmap--disclaimer)
9. [Getting Started](#getting-started)
10. [Contributing & Community Support](#contributing--community-support)
11. [Join Us](#join-us)
---
## Vision & Purpose
### A Bold New Era for AI
Most AI ecosystems today are locked behind opaque APIs and cloud services. This restricts how models interact, how memory is managed, and how agent-based tools evolve. **HiveChain** empowers developers to:
- **Regain Control:** Run advanced workflows locally, with no vendor lock-in.
- **Own the AI:** Contribute, earn rewards, and participate in a transparent, performance-based system.
- **Integrate Responsible Practices:** Harness advanced AI capabilities while incorporating ethical and safety considerations from the start.
We believe in **AI with purpose**: forging a future where both capabilities and accountability advance hand in hand.
---
## Key Features
1. **Multi-Model Orchestration**
- Seamlessly integrate OpenAI, DeepSeek, Mistral, LLaMA, and others.
- Switch between models at will—no more single-vendor bottlenecks.
2. **Local Reasoning & Memory**
- Full front-end **Retrieval-Augmented Generation (RAG)** and context management.
- Reproduce sophisticated AI workflows in your own environment.
3. **Agent-Based Execution**
- Chain together specialized AI agents to tackle multi-step tasks.
- Automate context-passing and dynamic reasoning for complex workflows.
4. **Alignment-Conscious by Design**
- Offers soft prompts, best practices, and optional guardrails without sacrificing developer autonomy.
- Provides transparent alignment logs so you can see exactly how safety considerations come into play.
5. **Developer-Centric Freedom**
- No forced ecosystem dependencies; HiveChain is fully open-source and built by developers for developers.
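
To make these features concrete, below is a brief hypothetical sketch of what multi-model orchestration and agent chaining could look like in Python. The `HiveChain` class and its `generate`, `agent`, `then`, and `run` methods are assumptions for illustration only; the library's actual API is still being finalized:

```python
# Hypothetical sketch only -- the published HiveChain API may differ.
from hivechain import HiveChain  # assumed entry point (library not yet released)

chain = HiveChain(config_path="hivechain_config.json")

# Multi-model orchestration: route each step to a different provider/model.
outline = chain.generate("Outline a data-migration plan", model="gpt-4o-mini")
draft = chain.generate(f"Expand this outline:\n{outline}", model="deepseek-v3")

# Agent-based execution: chain specialized agents; context passes automatically.
result = (
    chain.agent("researcher", model="claude-3.5-sonnet")
         .then(chain.agent("writer", model="gpt-4o"))
         .run("Summarize recent retrieval-augmented generation techniques")
)
print(draft, result, sep="\n---\n")
```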
---
## Why HiveChain?
### The Gap
- **Closed-Source APIs** limit how far developers can push new AI paradigms.
- **Cloud Dependencies** hand ultimate control to service providers.
- **Alignment as an Afterthought** leaves critical safety and ethical questions unaddressed.
### Our Response
- **Open, Front-End Execution:** Bring the “brain” of advanced AI to your local environment, not a distant server.
- **Responsible Yet Unrestricted:** Integrate best practices from the outset without stifling innovation.
- **Transparent Rewards:** Ensure that every success is shared with those who contribute to our growth.
---
## Alignment in Practice
AI development often focuses heavily on capabilities, with alignment treated as an afterthought. At HiveChain, we believe **alignment is critical** from the start.
1. **Built-in Alignment Hooks**
- Our library includes hooks that can log, analyze, and optionally guide AI outputs. Developers can customize the depth of these interventions for their use case.
2. **Developer-Controlled “Soft Guardrails”**
- Rather than imposing top-down restrictions, HiveChain offers soft prompts and educational nudges to encourage responsible usage.
- When discussions stray into sensitive territory, the system gently flags them for review.
3. **Alignment Data Sharing**
- We maintain a separate repository of alignment research and anonymized logs (where permissible) to accelerate the broader community’s work in AI safety.
4. **Continual Evolution**
- As methods advance, HiveChain will continuously update its best practices, keeping alignment front and center.
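
As a rough illustration of points 1 and 2, here is a minimal sketch of what a logging-plus-soft-guardrail hook might look like. The hook signature and the `register_hook` call are assumptions; only the standard-library logging shown is guaranteed to work as written:

```python
# Hypothetical alignment-hook sketch; the registration API is an assumption.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("hivechain.alignment")

SENSITIVE_TERMS = {"credentials", "medical", "exploit"}  # illustrative only

def alignment_hook(prompt: str, response: str) -> str:
    """Log every exchange and softly flag sensitive topics for review."""
    log.info("prompt=%r response_len=%d", prompt[:80], len(response))
    if any(term in prompt.lower() for term in SENSITIVE_TERMS):
        log.warning("Soft guardrail: sensitive topic flagged for review.")
    return response  # hooks observe (and may optionally rewrite) outputs

# Assumed registration point on a configured chain object:
# chain.register_hook(alignment_hook)
```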
---
## Contributor Incentive Model
HiveChain values every contribution and is committed to rewarding those who help shape our platform through a transparent, performance-based rewards system.
### How It Works
1. **AI-Driven Benchmarking**
- A specialized LLM system evaluates pull requests, design proposals, bug fixes, and documentation.
- Contributions are scored based on *impact, quality, and innovation*.
2. **Convertible Notes for Early Contributors**
- Early contributions will be recognized with convertible notes representing a **5% equity pool**.
- These notes grant dividend rights based on our performance and may, in the future, be converted into tokenized equity (subject to further refinement).
3. **Transparent Rewards**
- Our performance-based dividend system ensures that success is shared with both our investors and contributors.
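
For illustration only, a toy version of such a score might combine the three criteria as a weighted sum. The weights below are hypothetical and do not reflect any adopted policy; the real system is LLM-driven:

```python
# Toy contribution score; weights are hypothetical, not HiveChain policy.
def contribution_score(impact: float, quality: float, innovation: float) -> float:
    """Combine 0-10 ratings into a single weighted score."""
    weights = {"impact": 0.5, "quality": 0.3, "innovation": 0.2}  # assumed
    return (weights["impact"] * impact
            + weights["quality"] * quality
            + weights["innovation"] * innovation)

print(contribution_score(impact=8, quality=9, innovation=6))  # -> 7.9
```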
---
## Open-Book Transparency
### 100% Public Financials
HiveChain operates under an **open-book policy**, documenting:
- **Revenue Streams:** Donations, sponsorships, or pro-tier subscriptions.
- **Expenses:** Infrastructure, development grants, marketing, and more.
- **Disbursements:** Direct allocations to community contributors.
### Governance Disclosures
- **No Hidden Salaries:** All compensations and stipends are openly disclosed.
- **Community Oversight:** Governance decisions are made transparently, ensuring alignment with collective goals.
---
## Monetization Strategy
HiveChain aims to remain financially sustainable without compromising its core open-source values:
1. **Donation & Community Funding (Early Stage)**
- Initially relying on donations and sponsorships to fund development.
2. **Pro Version & Enterprise Support (Future Offering)**
- An optional **Pro tier** will provide advanced enterprise-focused features (e.g., team collaboration, extended security, specialized performance analytics).
- The fundamental features of HiveChain remain free to ensure AI accessibility.
3. **Strategic Partnerships & Services (Long-Term)**
- Potential collaborations with enterprises, offering specialized consulting or hosting solutions to support continued R&D and best practices.
Our revenue approach ensures that we balance **open availability** with the practical resources needed to drive HiveChain’s evolution.
---
## Roadmap & Disclaimer
Below is a **flexible roadmap** highlighting our *current aspirations* for HiveChain’s development. These timelines and goals may evolve as the project matures.
1. **Alpha (Targeting Q1 2025)**
- Release of Python & Java core libraries.
- Foundational agent-based execution and local RAG/memory features.
- Initial alignment hooks for soft guardrails.
2. **Beta (Targeting Q2 2025)**
- Expanded multi-model support (e.g., LLaMA, Mistral).
- Enhanced alignment logs and developer-configurable guardrails.
- Early prototype of the contributor incentive dashboard.
3. **GUI-Integrated Platform (Tentative Q3 2025)**
- Visual interface for chaining agents, monitoring AI interactions, and orchestrating complex workflows.
- Real-time alignment notifications and analytics for easier oversight.
4. **Pro Tier Preview (Estimated Q4 2025)**
- Optional enterprise enhancements like advanced security and collaboration tools.
- Milestones to refine revenue distribution mechanisms.
5. **Long-Term Evolution (Ongoing)**
- Continued research, extended language coverage, deeper integrations, and community-led innovations.
- Further enhancements to our contributor incentive model as HiveChain grows.
> **Disclaimer:** The timeline and features in this roadmap are *aspirational targets* that may change as we adapt to developer feedback, resource availability, and the evolving AI landscape.
---
## Getting Started
### 1. Installing HiveChain
> **Note:** The initial Alpha release is *in progress*. The following commands represent our planned install process:

```bash
# Python (PyPI)
pip install hivechain
```

```xml
<!-- Java (Maven) -->
<dependency>
  <groupId>com.hivechain</groupId>
  <artifactId>hivechain-core</artifactId>
  <version>0.1.0</version>
</dependency>
```
### 2. Configuration Reference
The planned configuration file (for example, `hivechain_config.json`) declares the default provider, per-provider model settings with token limits and per-token pricing, global parameter limits, and feature flags:

```json
{
  "default_provider": "openai",
  "providers": {
    "openai": {
      "api_key_env": "OPENAI_API_KEY",
      "models": {
        "o1": {
          "engine": "o1",
          "type": "openai",
          "default_temperature": 0.7,
          "max_token_input": 200000,
          "max_token_output": 200000,
          "per_token": {
            "input": 0.015,
            "cached_input": 0.0075,
            "output": 0.060
          }
        },
        "o3-mini": {
          "engine": "o3-mini",
          "type": "openai",
          "default_temperature": 0.7,
          "max_token_input": 200000,
          "max_token_output": 200000,
          "per_token": {
            "input": 0.0011,
            "cached_input": 0.00055,
            "output": 0.0044
          }
        },
        "gpt-4o": {
          "engine": "gpt-4o",
          "type": "openai",
          "default_temperature": 0.7,
          "max_token_input": 128000,
          "max_token_output": 128000,
          "per_token": {
            "input": 0.0025,
            "cached_input": 0.00125,
            "output": 0.0100
          }
        },
        "gpt-4o-mini": {
          "engine": "gpt-4o-mini",
          "type": "openai",
          "default_temperature": 0.7,
          "max_token_input": 128000,
          "max_token_output": 128000,
          "per_token": {
            "input": 0.00015,
            "cached_input": 0.000075,
            "output": 0.0006
          }
        },
        "gpt-3.5-turbo": {
          "engine": "gpt-3.5-turbo",
          "type": "openai",
          "default_temperature": 0.7,
          "max_token_input": 4096,
          "max_token_output": 4096,
          "per_token": {
            "input": 0.0015,
            "cached_input": 0.00075,
            "output": 0.0020
          }
        }
      }
    },
    "deepseek": {
      "api_key_env": "DEEPSEEK_API_KEY",
      "models": {
        "deepseek-v3": {
          "engine": "deepseek-v3",
          "type": "deepseek",
          "default_temperature": 0.6,
          "max_token_input": 128000,
          "max_token_output": 128000,
          "per_token": {
            "input": 0.00014,
            "cached_input": 0.00007,
            "output": 0.00219
          }
        },
        "deepseek-r1": {
          "engine": "deepseek-r1",
          "type": "deepseek",
          "default_temperature": 0.6,
          "max_token_input": 64000,
          "max_token_output": 8000,
          "per_token": {
            "input": 0.00027,
            "cached_input": 0.00007,
            "output": 0.00110
          }
        }
      }
    },
    "anthropic": {
      "api_key_env": "ANTHROPIC_API_KEY",
      "models": {
        "claude-3.5-sonnet": {
          "engine": "claude-3.5-sonnet",
          "type": "anthropic",
          "default_temperature": 0.7,
          "max_token_input": 200000,
          "max_token_output": 200000,
          "per_token": {
            "input": 0.0030,
            "cached_input": 0.0003,
            "output": 0.015
          }
        },
        "claude-3.5-haiku": {
          "engine": "claude-3.5-haiku",
          "type": "anthropic",
          "default_temperature": 0.7,
          "max_token_input": 200000,
          "max_token_output": 200000,
          "per_token": {
            "input": 0.0008,
            "cached_input": 0.00008,
            "output": 0.004
          }
        },
        "claude-3-opus": {
          "engine": "claude-3-opus",
          "type": "anthropic",
          "default_temperature": 0.7,
          "max_token_input": 200000,
          "max_token_output": 200000,
          "per_token": {
            "input": 0.00375,
            "cached_input": 0.0003,
            "output": 0.015
          }
        }
      }
    },
    "google": {
      "api_key_env": "GOOGLE_API_KEY",
      "models": {
        "gemini-2.0-flash": {
          "engine": "gemini-2.0-flash",
          "type": "google",
          "default_temperature": 0.7,
          "max_token_input": 128000,
          "max_token_output": 128000,
          "per_token": {
            "input": 0.0025,
            "cached_input": 0.00125,
            "output": 0.01
          }
        }
      }
    },
    "meta": {
      "api_key_env": "none",
      "models": {
        "llama-3.1": {
          "engine": "llama-3.1",
          "type": "meta",
          "default_temperature": 0.7,
          "max_token_input": 128000,
          "max_token_output": 128000,
          "per_token": {
            "input": null,
            "cached_input": null,
            "output": null
          }
        }
      }
    },
    "huggingface": {
      "api_key_env": "none",
      "models": {
        "mistral-7b": {
          "engine": "mistral-7b",
          "type": "huggingface",
          "default_temperature": 0.5,
          "max_token_input": 8192,
          "max_token_output": 8192,
          "per_token": {
            "input": null,
            "cached_input": null,
            "output": null
          }
        }
      }
    },
    "alibaba": {
      "api_key_env": "none",
      "models": {
        "qwen-2.5-max": {
          "engine": "qwen-2.5-max",
          "type": "alibaba",
          "default_temperature": 0.7,
          "max_token_input": 8192,
          "max_token_output": 8192,
          "per_token": {
            "input": null,
            "cached_input": null,
            "output": null
          }
        }
      }
    }
  },
  "parameter_limits": {
    "temperature": {
      "min": 0.0,
      "max": 1.0
    },
    "max_tokens": {
      "min": 1,
      "max": 200000
    }
  },
  "features": {
    "use_memory": true,
    "use_retrieval": false
  }
}
```
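
The following is a minimal usage sketch, not a final API: it loads the configuration above (assuming it is saved as `hivechain_config.json`), clamps a requested temperature against `parameter_limits`, and estimates request cost from the `per_token` rates. Treating those rates as USD per 1,000 tokens is an assumption based on typical provider pricing:

```python
# Minimal config-handling sketch; file name and per-1,000-token unit assumed.
import json

with open("hivechain_config.json") as f:
    cfg = json.load(f)

def model_cfg(provider: str, model: str) -> dict:
    """Look up one model's settings from the providers tree."""
    return cfg["providers"][provider]["models"][model]

def clamp_temperature(value: float) -> float:
    """Clamp a requested temperature into the configured limits."""
    limits = cfg["parameter_limits"]["temperature"]
    return max(limits["min"], min(limits["max"], value))

def estimate_cost(provider: str, model: str, tokens_in: int, tokens_out: int) -> float:
    """Simple cost estimate ignoring cached-input discounts."""
    rates = model_cfg(provider, model)["per_token"]
    return (tokens_in * rates["input"] + tokens_out * rates["output"]) / 1000

print(clamp_temperature(1.7))                        # -> 1.0
print(estimate_cost("openai", "gpt-4o", 2000, 500))  # -> 0.01
```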
# Apache License 2.0 with HiveChain Commercial Clause
## Apache License
Version 2.0, January 2004
Copyright (c) HiveChain
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
## Additional Clause for HiveChain Commercial Use
In addition to the terms of the Apache 2.0 License, **commercial use of HiveChain software requires a separate commercial license.**
1. **Free Use for Individuals & Research**
- This software is free to use for personal, educational, and non-commercial research purposes.
- Open-source modifications and non-commercial forks are permitted under Apache 2.0.
2. **Commercial Licensing Requirement**
- Any business, organization, or entity using HiveChain for **commercial purposes** must obtain a commercial license.
- The HiveChain Commercial License governs business usage and includes fair pricing policies and governance rules.
- The commercial license terms can be found in the **HiveChain Commercial License Agreement**.
3. **License Enforcement**
- HiveChain reserves the right to **verify fair use** and enforce compliance with commercial licensing terms.
- Businesses that fail to obtain a commercial license may be required to retroactively purchase one or cease usage.
For details on commercial licensing, visit: **[HiveChain Commercial License Agreement]**
By using this software, you acknowledge that you understand and agree to these licensing terms.
# HiveChain Contributor License Agreement (CLA)
## Introduction
Thank you for your interest in contributing to the HiveChain project. This Contributor License Agreement (CLA) defines the terms under which contributions are made to HiveChain, ensuring that all intellectual property rights are retained by HiveChain and that the project remains free for personal use while maintaining commercial licensing requirements.
By contributing to this project, you agree to the following terms:
---
## 1. **Ownership & Intellectual Property Rights**
(a) All contributions to HiveChain become the **sole property of HiveChain**. Contributors retain no ownership or IP rights over their contributions.
(b) Any work submitted must be original, and contributors must have the legal right to assign ownership to HiveChain. If third-party work is included, it must be under a license that allows HiveChain to assume full ownership and control.
## 2. **License Grant & Usage**
(a) HiveChain grants **free use of the library for individuals and non-commercial purposes**.
(b) **Businesses must obtain a paid licensing agreement** to use HiveChain commercially.
(c) License pricing will be **affordable and subject to periodic review**.
(d) HiveChain retains the right to **modify license terms** for future versions.
(e) Businesses that have already obtained a valid license will **retain their right to use HiveChain under the original terms**, even if the business model changes in the future. Prices for continued use may be adjusted over time, but will remain **reasonable and subject to predefined rules** to ensure fairness.
(f) HiveChain reserves the right to conduct **fair use verification** to ensure that businesses claiming non-commercial use are complying with the licensing requirements.
## 3. **Patents & Liability**
(a) If your contribution includes patented technology, you grant HiveChain and its users a **free, non-exclusive, irrevocable license** to use, modify, and distribute the patented material **as part of HiveChain**.
(b) You agree **not to assert patent claims** against HiveChain or its users for any portion of the project that includes your contribution.
## 4. **Future Paid Tools & Licensing**
(a) Future tools built on HiveChain may be open-source but will **not be freely licensed for commercial use**.
(b) These tools will be available under **separate commercial licensing agreements**.
(c) Any modifications or derivatives of HiveChain must remain under HiveChain’s governance and cannot be relicensed under a more restrictive license by third parties, ensuring continuity and alignment with HiveChain’s open-source principles.
## 5. **Transparency & Dispute Resolution**
(a) All contributions and contributor records are **public and permanently recorded** as part of the version control system.
(b) **Disputes over contributions will be resolved transparently within the HiveChain governance process**.
(c) HiveChain **does not require NDAs or closed agreements** and aims to operate as an open-book project with publicly available financials and governance records.
## 6. **No Obligation & Warranties**
(a) Contributors are **not obligated to provide ongoing support** for their contributions.
(b) HiveChain is **not required to use or retain** any contribution permanently.
(c) All contributions are provided **"as is"** without warranties or guarantees.
---
By contributing, you acknowledge that you have read and agree to this CLA. If you are contributing on behalf of an employer or another entity, you certify that you have the authority to accept these terms on their behalf.
*This CLA is designed to align with HiveChain’s mission of transparency, intellectual property retention, and ethical AI development. If you have questions, contact the HiveChain maintainers.*
# Code of Conduct
## Our Pledge
We, as members, contributors, and leaders of the HiveChain community, pledge to make participation in our project a harassment-free experience for everyone. We commit to treating everyone with respect, professionalism, and patience, regardless of background or experience. Our community is built on collaboration and trust, and we will work together to foster an environment that is welcoming and inclusive.
## Our Standards
Examples of behavior that contribute to a positive environment for our community include:
- Demonstrating empathy and kindness toward others
- Being respectful of differing opinions, viewpoints, and experiences
- Providing and gracefully accepting constructive feedback
- Accepting responsibility and apologizing when mistakes occur, and learning from them
- Focusing on what is best for the community and the project
Examples of unacceptable behavior include:
- The use of sexualized language or imagery, and unwelcome sexual attention or advances
- Trolling, insulting or derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others’ private information (such as a physical or email address) without explicit permission
- Other conduct that could reasonably be considered inappropriate in a professional setting
## Responsibilities & Enforcement
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that do not align with this Code of Conduct. They will communicate reasons for moderation decisions when appropriate.
## Scope
This Code of Conduct applies within all project spaces—such as issue trackers, discussion forums, chat channels, and project events—and it also applies when an individual is officially representing the project in public spaces. Representation includes acting as a maintainer, using an official project e-mail address or social media account, or participating as an appointed representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the HiveChain project team (refer to the project’s repository or website for contact information). All complaints will be reviewed and investigated promptly and fairly. Project maintainers are obligated to respect the privacy and security of the reporter and any others involved in an incident.
Project maintainers who do not follow or enforce this Code of Conduct in good faith may face temporary or permanent consequences as determined by the project’s leadership.
## Attribution
This Code of Conduct is adapted from the Contributor Covenant, version 2.1, available on the Contributor Covenant website. For answers to common questions about this Code of Conduct, see the FAQ on the Contributor Covenant website or contact the HiveChain maintainers for assistance.
# Contributing
Thank you for considering contributing to HiveChain! Your involvement helps shape the future of AI integration and makes HiveChain more robust and useful for everyone. This guide outlines how to get involved and the best practices to follow.
## How to Contribute
1. **Fork the Repository:**
Fork the HiveChain GitHub repository to your account to create a personal copy of the project.
2. **Create a Branch:**
Create a new branch in your fork for your contribution. Give it a descriptive name (e.g., `feature/add-XYZ-connector` or `bugfix/fix-integration-issue`) to indicate what you are working on.
3. **Make Changes:**
Develop your feature or bug fix in your branch. Follow the coding style and structure of the existing code. If you add a new integration module or significant feature, include documentation or comments to help others understand your work.
4. **Write Tests (If Applicable):**
If the project has automated tests, consider writing tests for your changes. This helps catch any issues early and ensures that your contribution does not break existing functionality.
5. **Commit Your Changes:**
Commit your changes with clear and descriptive commit messages that briefly state what was changed and why (e.g., `Add support for XYZ API connector to improve data import functionality`).
6. **Push and Open a Pull Request:**
Push your branch to your GitHub fork and open a Pull Request (PR) to the main HiveChain repository. In the PR description, provide a concise summary of your changes, why they are needed, and any relevant context. If your PR addresses an open issue, reference that issue number (e.g., “Fixes #42”).
7. **Collaborate During Review:**
A maintainer will review your pull request and may request changes or ask clarifying questions. Please be open to this feedback. Update your PR by pushing additional commits to your branch. The PR will automatically reflect these updates.
8. **Sign the CLA:**
Before your contribution can be merged, you need to sign the Contributor License Agreement (if you haven’t already done so). This is usually a quick process (often via an online form or bot instruction) and is required to ensure that you agree to license your contribution under the project’s terms. See the **Contributor License Agreement** document for details.
## Code Style and Guidelines
- **Consistency:**
Keep your code style consistent with the project. For Python code, follow PEP 8 guidelines. We recommend using linters/formatters (such as `flake8` or `black`) to format your code automatically.
- **Documentation:**
Update documentation for any user-facing changes. This might mean editing the README, adding a section to the Getting Started guide, or commenting your code as needed.
- **Focused Commits/PRs:**
Try to make each pull request focused on one topic or fix. Smaller, focused PRs are easier to review and merge. If you have multiple unrelated changes, split them into separate PRs.
## Communication
- **Opening Issues:**
If you have an idea or find a bug, feel free to open an issue on GitHub to discuss it before investing time in code changes. This can help gather input from maintainers or the community.
- **Community Channels:**
Join our community channels (such as Slack, Discord, or a mailing list, as listed on our site) to ask questions and engage with other contributors.
- **Respect and Professionalism:**
All interactions in the project are governed by our Code of Conduct. Be respectful and constructive in all communications.
## Acknowledgments
Every contribution is valuable—whether it is a major feature, a minor fix, or improvements to documentation. We thank everyone who helps improve HiveChain. By contributing, you become part of a community working together to build a powerful platform for AI integration.
# HiveChain Commercial License Agreement
## 1. **Introduction**
This HiveChain Commercial License Agreement ("Agreement") governs the use of the HiveChain library and software ("Software") by businesses and organizations for commercial purposes. By using the Software for commercial purposes, the business ("Licensee") agrees to the following terms.
## 2. **License Grant**
(a) HiveChain grants the Licensee a **non-exclusive, non-transferable, revocable license** to use the Software for commercial purposes, subject to the terms of this Agreement.
(b) This license **does not** grant ownership or any intellectual property rights to the Licensee. All rights remain with HiveChain.
(c) Licensees are permitted to **modify and use** the Software internally but may not relicense, sell, or distribute it outside their organization without a separate agreement.
## 3. **Commercial Licensing & Pricing**
(a) Businesses are required to obtain a commercial license to use the Software beyond non-commercial or personal use.
(b) The initial licensing fee will be set at a reasonable rate, subject to periodic review.
(c) To ensure fairness and predictability, **price increases for existing license holders will not exceed 20% every 4 years**.
(d) HiveChain reserves the right to adjust pricing for **new customers** while maintaining grandfathered pricing for existing licensees under the agreed terms.
## 4. **Fair Use & Compliance Verification**
(a) Businesses claiming non-commercial use may be subject to **HiveChain’s fair use verification** process.
(b) If HiveChain determines that a Licensee is using the Software commercially without a valid license, they will be required to obtain a commercial license retroactively or cease use.
(c) In cases where HiveChain incurs **legal or administrative costs** to enforce compliance, the Licensee shall be responsible for covering these costs in addition to the retroactive licensing fees.
(d) HiveChain provides an **anonymous reporting system** for users, investors, and collaborators to report suspected violations securely.
(e) Reports will be reviewed based on **community-driven oversight**, ensuring fair and transparent enforcement.
(f) Businesses found in violation may face **public disclosure of non-compliance**, reinforcing accountability and discouraging unethical practices.
## 5. **Modifications & Derivative Works**
(a) Any modifications, improvements, or derivative works created using the Software must remain under **HiveChain’s governance and cannot be relicensed under different terms**.
(b) Licensees may internally develop enhancements but cannot sell or distribute modified versions of HiveChain without explicit permission.
## 6. **Termination & Revocation**
(a) HiveChain reserves the right to terminate this license if the Licensee violates any terms of this Agreement.
(b) Licensees who fail to comply with the licensing requirements may have their license revoked and be subject to legal enforcement.
## 7. **No Warranty & Liability**
(a) The Software is provided "as is" without warranties of any kind.
(b) HiveChain is not liable for any damages resulting from the use of the Software.
## 8. **Amendments & Future Changes**
(a) HiveChain reserves the right to modify this Agreement for new customers, but **existing license holders will continue under the agreed pricing and terms**.
(b) Changes to licensing policies will be publicly disclosed and documented.
## 9. **Governing Law**
This Agreement is governed by and construed in accordance with the laws of the jurisdiction in which HiveChain operates.
---
By obtaining and using a HiveChain commercial license, the Licensee acknowledges that they have read, understood, and agreed to these terms.
For questions regarding licensing, contact the HiveChain maintainers.
# FAQ
**What is HiveChain?**
HiveChain is an early-stage platform that aims to simplify the integration of advanced AI systems (such as large language models and other foundation models) into real-world business applications. It provides a framework for connecting AI models with existing processes and tools, enabling organizations to adopt AI capabilities more seamlessly and effectively.
**What problem does HiveChain solve?**
Many companies struggle to incorporate AI into their products or operations due to the complexity of integrating AI models with legacy systems and workflows. HiveChain addresses this by offering an integration layer that bridges the gap between AI models and business applications. This helps reduce technical barriers and the experimentation time needed to bring AI-driven features to market. By streamlining AI integration, HiveChain allows businesses to focus on leveraging AI insights and automation without reinventing their infrastructure.
**Who is HiveChain for?**
HiveChain is designed for AI engineers, developers, and forward-thinking organizations that want to embed AI capabilities into their services or internal processes. If you are building applications that could benefit from natural language understanding, predictive analytics, or other AI-powered functionalities, HiveChain can help you connect those AI models to your application logic in a reliable and scalable way. It is also valuable for researchers and contributors interested in applied AI, as it provides a collaborative platform to experiment with AI integrations.
**What is the current status of the project?**
HiveChain is currently in its Minimum Viable Product (MVP) phase. The core functionality is in place, and a basic version of the platform is available for demonstration and initial testing. However, resources are limited and the feature set is focused on proving the core concept. Users can expect the fundamental integration features to work, though some advanced capabilities may still be under development or refinement. We are transparent about our progress and limitations, and we encourage early adopters to provide feedback to help us improve.
**How can I contribute or get involved?**
We welcome contributions from developers and AI enthusiasts. If you are interested in contributing to the HiveChain codebase, start by reading the **Getting Started** guide and the **Contributing** guidelines in our documentation. There are opportunities to help with coding, documentation, testing, and providing feedback on the user experience. If you are not a developer, you can still get involved by trying out the platform, reporting issues, or suggesting improvements. For potential collaborators or partners, please reach out to our team to explore how we can work together.
**How is HiveChain different from other AI integration solutions?**
HiveChain focuses on being a dedicated integration layer for AI that is both developer-friendly and adaptable to various AI models. Unlike some platforms that offer pre-built AI services, HiveChain is model-agnostic: it can work with different AI APIs or open-source models, giving users the flexibility to choose the AI that best fits their needs. Additionally, HiveChain emphasizes transparency and collaboration. As an open project (with open-source components), it encourages community input and trust. Our goal is to provide a robust yet lightweight solution that complements existing tools rather than replacing them, making AI adoption more incremental and aligned with each organization’s pace.
**What is HiveChain’s long-term vision?**
In the long term, HiveChain aims to evolve into a comprehensive ecosystem for AI integration. While our current focus is on delivering immediate value through the MVP, we see future potential in expanding HiveChain’s capabilities to support more complex workflows, multiple AI model types, and enterprise-level requirements. We foresee HiveChain helping to establish best practices for integrating AI in ways that align with business goals and ethical standards. Although we have ambitious ideas about shaping how businesses utilize AI, we are approaching this vision step by step. Our priority now is to build a solid foundation and demonstrate real-world usefulness. As we grow and learn from user feedback, we will carefully broaden the platform’s scope while maintaining reliability and trust.
# Getting Started
This guide will help you set up HiveChain and run a simple example to understand how the platform works. As HiveChain is in an early MVP stage, the setup is straightforward but may require familiarity with basic development tools.
## Prerequisites
- **Operating System:** HiveChain is cross-platform, but it is primarily tested on modern 64-bit Linux distributions. It should also run on macOS and Windows with minimal adjustments.
- **Python 3.x Environment:** Ensure you have Python 3.8 or higher installed. Verify your Python version by running `python3 --version` in your terminal.
- **Dependencies:** HiveChain uses several Python libraries for interfacing with AI models and other services. These are listed in the `requirements.txt` file. An internet connection is required to download these dependencies.
## Installation
1. **Clone the Repository:**
```bash
git clone https://github.com/YourUsername/HiveChain.git
cd HiveChain
```
2. **Create a Virtual Environment (Optional):**
```bash
python3 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
```
3. **Install Dependencies:**
```bash
pip install -r requirements.txt
```
4. **Basic Configuration:**
If HiveChain requires any configuration (for example, API keys for external AI services or paths to model files), copy the provided sample config (e.g., `config.example.yml`) to a new file (e.g., `config.yml`) and update it with your settings.
5. **Run an Example:**
To verify that everything is set up correctly, run the sample script or demo included in the repository:
```bash
python examples/hello_hivechain.py
```
Check the console output for confirmation that HiveChain initialized properly and that you received an AI-generated response.
6. **Explore the Platform:**
Once the example runs successfully, start exploring HiveChain further. Review the `README.md` for an overview of features and usage. The `docs/` directory (if available) contains more detailed explanations of HiveChain’s architecture and how to create custom integration workflows. Keep in mind that, as an early MVP, some features are limited or in-progress. If you encounter any issues or have questions, refer to our documentation or seek help from the community.
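
For reference, a hypothetical `examples/hello_hivechain.py` might look like the sketch below; the module layout, `HiveChain` class, and `generate` method are assumptions at this stage:

```python
# Hypothetical examples/hello_hivechain.py -- names are assumptions.
import os
from hivechain import HiveChain  # assumed entry point

def main() -> None:
    # Fail fast if the configured provider's API key is missing.
    if not os.environ.get("OPENAI_API_KEY"):
        raise SystemExit("Set OPENAI_API_KEY before running this example.")
    chain = HiveChain(config_path="config.yml")  # config file from step 4
    reply = chain.generate("Say hello from HiveChain!")
    print(reply)

if __name__ == "__main__":
    main()
```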
## Next Steps
- **Join the Community:**
Connect with us via our discussion forum or chat channels (links are available on our repository or website). Share your experiences, ask questions, and learn from other HiveChain users and contributors.
- **Stay Updated:**
As HiveChain is actively evolving, we recommend watching (or “starring”) the GitHub repository to receive notifications about new updates, fixes, and features. Regularly pull the latest changes if you are tracking the main branch.
- **Contribute:**
If you’re interested in contributing to HiveChain’s development, read the **Contributing** guide. There are many ways to help—from coding new features or connectors to improving documentation and providing feedback on design decisions.
# High-Level Roadmap
- **MVP Development (Current Stage):**
Our initial focus has been on building the core functionality of HiveChain. In this stage, we have developed a basic prototype that demonstrates how AI models can be connected to simple applications. The goal was to validate the concept with minimal features and gather early feedback. We have kept the scope lean to ensure the foundation is solid before expanding functionality.
- **Alpha Release and Feedback (Next Step):**
Once the MVP is stable, we plan to release an alpha version to a select group of early users and contributors. During this phase, the emphasis will be on testing HiveChain in real-world scenarios, identifying integration challenges, and collecting user feedback. We expect to refine the user experience, fix bugs, and add a few essential features based on the needs of our early adopters.
- **Beta Expansion (Upcoming):**
After incorporating feedback from the alpha phase, we aim to expand HiveChain’s capabilities for a beta release. This phase will likely include support for a broader range of AI models or providers, additional integration connectors (such as databases, APIs, or messaging systems), and improved scalability and security features. We anticipate that during the beta phase, HiveChain can be tested in more production-like environments, though it will still be an early-stage product used with appropriate caution.
- **Community Building and Partnerships:**
In parallel with the technical roadmap, we are actively building a community around HiveChain. This involves creating thorough documentation, establishing governance (including contribution guidelines and a Code of Conduct), and engaging with potential integration partners. We recognize that partnerships with AI service providers or early adopter businesses can greatly enhance HiveChain’s development. Given our resource constraints, we will progress deliberately, focusing on high-impact collaborations that validate our approach.
- **Full Release and Ongoing Improvement:**
Our ultimate goal is to reach a stable 1.0 release of HiveChain, signifying that the platform is robust enough for a wider range of applications. Leading up to this milestone, we will continuously improve the platform—optimizing performance, ensuring robust error handling, and fine-tuning integration workflows. After 1.0, the focus will shift to ongoing enhancements based on user needs and the evolving AI landscape. Potential future enhancements include more automated methods to align AI outputs with business objectives and tools to monitor AI performance in production. These ideas are part of our long-term vision, but we will introduce them carefully, keeping user feedback and reliability as top priorities.
*Note:* This high-level roadmap is subject to change. As an early-stage project, HiveChain will adapt to feedback and practical insights. Timelines are kept flexible to account for our limited team size and the complexity of integrating cutting-edge AI. Our priority is delivering a reliable and useful platform, even if that means adjusting milestones along the way.
# Developer Equity Incentive Plan (Draft)
*This document outlines our long-term vision for a developer equity incentive plan. Please note that this plan is a work in progress and is subject to further refinement, legal review, and potential changes as HiveChain evolves.*
## Overview
HiveChain is committed to building an innovative, transparent, and community-aligned organization. As part of our long-term vision, we plan to set aside a **5% equity pool** to reward early contributors—developers, designers, and other key collaborators—who help build and sustain our platform.
## Key Components
### 1. Convertible Notes with Future Token Conversion
- **Initial Instrument:**
Early contributions will be rewarded with convertible notes. These notes grant an equity stake in HiveChain and are designed to pay dividends based on the company’s performance.
- **Future Conversion to Tokenized Equity:**
At a later stage, once HiveChain has matured and our technical and legal frameworks are fully in place, these notes may be converted into tokenized coins.
*Disclaimer:* Conversion to tokens is not guaranteed; contributors may retain their notes if conversion is not feasible.
### 2. Dividend Payment System
- **Performance-Based Dividends:**
Dividends will be allocated based on the actual performance of the company. This means that dividends will be calculated using a transparent formula:
- **Dividends = (Company Revenue - Expenses) × [Pre-Defined Dividend Allocation Percentage]**
- **Escrow and Withdrawal Mechanism:**
- Dividends will be held in a secure escrow account.
- Contributors will receive a unique serial number and private key with their note.
- An API and user-friendly GUI will allow contributors to check their dividend balance.
- Withdrawals will only be enabled when the accumulated dividend amount exceeds a minimum threshold (to avoid frequent micro-withdrawals).
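
As a toy worked example of the formula above (the 10% allocation percentage is purely hypothetical, since the real percentage is not yet fixed):

```python
# Toy dividend calculation; the 10% allocation is hypothetical.
def dividend_pool(revenue: float, expenses: float, allocation: float = 0.10) -> float:
    """Dividends = (Company Revenue - Expenses) x allocation percentage."""
    return max(0.0, revenue - expenses) * allocation

print(dividend_pool(revenue=100_000, expenses=60_000))  # -> 4000.0
```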
### 3. Long-Term Vision & Disclaimers
- **Aspirational, Not Immediate:**
This plan is designed as a long-term incentive mechanism. The details of the convertible notes, dividend formulas, and potential token conversion are subject to change as the project develops.
- **No Immediate Guarantee:**
While our goal is to implement this equity pool to reward early contributors, there is no guarantee that every element (e.g., token conversion) will be executed exactly as described. Changes may occur based on legal, technical, and business considerations.
- **Commitment to Transparency:**
In keeping with our open-book policy, all financial transactions, including dividend calculations and distributions, will be published publicly (to the extent legally permitted). This ensures that our process remains transparent and aligned with our core values.
## Conclusion
HiveChain’s Developer Equity Incentive Plan represents our commitment to rewarding genuine contributions and aligning the interests of our community with the success of the company. We believe that by building an organization where rewards are based on actual performance—and by maintaining transparency throughout—we can set a new standard for ethical and community-focused innovation.
*This document is a draft and will be updated over time. We welcome feedback from our community to help shape this initiative into a robust, fair, and sustainable incentive model.*
---
*Last Updated: [Insert Date]*
# Trademark Statement
"HiveChain" is the trademark (and brand name) associated with our AI integration platform. We ask that our community and the public respect the proper use of this name to avoid confusion and protect the project’s identity and reputation.
**Allowed Uses:**
You may use the name "HiveChain" to refer to the project, its open-source software, or the community in articles, tutorials, and discussions. For example, phrases like "built on HiveChain" or "using the HiveChain platform" are acceptable when accurately describing your integration or product in relation to our project.
**Prohibited Uses:**
You may not use the "HiveChain" name or logo in a way that suggests official endorsement, sponsorship, or affiliation where none exists. This includes not using "HiveChain" as part of your own product’s name, company name, or domain name if your project is unrelated to the HiveChain platform. Similarly, do not modify or incorporate our logo into your own project branding in a manner that could mislead others into thinking your project is officially connected to us.
We reserve the right to enforce our trademark rights to prevent misuse. These guidelines are not meant to discourage legitimate references to HiveChain, but rather to ensure that "HiveChain" clearly and consistently refers to our project.
If you have any questions about using the HiveChain name or logo, or if you wish to seek permission for a specific use, please contact the HiveChain team.
# Vision
HiveChain’s vision is to make advanced AI technologies accessible and beneficial to a wide range of businesses and developers by focusing on integration and alignment. We believe that the true value of AI emerges when cutting-edge models are effectively integrated into real-world operations, bridging the gap between theoretical potential and practical impact.
**Bridging AI and Real-World Needs:**
Today, many powerful AI models exist, but organizations struggle to apply them meaningfully within their existing systems. Our platform is built to close this gap. HiveChain acts as the “glue” that connects AI models with the everyday tools and processes businesses rely on. By doing so, we help organizations leverage AI for tasks like automating customer support, analyzing data, or augmenting decision-making—without requiring a complete overhaul of their tech stack.
**Simplicity and Collaboration:**
A core principle of our vision is simplicity. We aim to abstract away the complexity of dealing with AI model APIs, data pipelines, and infrastructure, presenting users with a clean and developer-friendly interface. At the same time, HiveChain is built with collaboration in mind. We envision a community where AI engineers, data scientists, and domain experts come together to share integration templates, best practices, and plugins—accelerating innovation for everyone involved.
**Ethical AI Alignment:**
We recognize that integrating AI is not just a technical challenge but also an ethical one. HiveChain is committed to facilitating AI integrations that are aligned with an organization’s values and policies. This means providing tools (even if basic at first) to help monitor AI outputs, manage biases, and ensure compliance with data privacy standards. As we evolve, we intend to incorporate more features that guide users toward responsible AI usage. While our current capabilities are limited, this commitment to ethical alignment underpins our long-term design philosophy.
**Long-Term Outlook:**
In the long run, we see HiveChain playing a role in shaping how businesses adapt to an AI-driven future. By lowering the barriers to AI adoption, we help more organizations benefit from AI advancements, contributing to broader industry transformation. Our future vision includes HiveChain becoming a standard layer in the enterprise tech stack for connecting AI, analogous to how databases or cloud services are standard today. We hint at possibilities such as novel forms of cross-organization AI collaboration, but our present focus remains firmly on delivering a trustworthy and effective product within its current scope. Each step we take is measured against this vision, ensuring that even early features are building blocks toward a more integrated and aligned AI future.
In summary, HiveChain strives to be more than just a tool—it aims to be a catalyst for change in how AI is adopted responsibly. We are starting small, but we are thinking big (with appropriate caution). Every improvement, line of code, and design decision is made with the overarching vision in mind: a world where adopting AI is easier, safer, and yields positive outcomes for businesses and society.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>HiveChain - Coming Soon</title>
<!-- Google Fonts -->
<link
href="https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@300;400;700&display=swap"
rel="stylesheet"
/>
<!-- Main CSS -->
<link rel="stylesheet" href="website/styles.css" />
<style>
/* Button to access codebase */
.codebase-button {
position: fixed;
bottom: 10px;
right: 10px;
padding: 8px 12px;
font-size: 12px;
background: #222;
color: #fff;
border: none;
cursor: pointer;
border-radius: 5px;
opacity: 0.4;
transition: opacity 0.3s;
}
.codebase-button:hover {
opacity: 1;
}
/* Codebase display modal */
.codebase-modal {
display: none;
position: fixed;
bottom: 50px;
right: 10px;
width: 300px;
max-height: 400px;
overflow-y: auto;
background: #1e1e1e;
color: #fff;
padding: 10px;
border-radius: 5px;
font-family: monospace;
font-size: 12px;
box-shadow: 0px 4px 6px rgba(0, 0, 0, 0.2);
white-space: pre-wrap;
}
</style>
</head>
<body>
<!-- Background Layers -->
<div class="background-image"></div>
<div class="background-overlay"></div>
<!-- Main Content -->
<div class="content">
<h1>HiveChain™</h1>
<p>Rethinking AI Development. Bringing Powerful AI Back to Your Front End.</p>
<p class="coming-soon">Coming Soon</p>
<!-- New "Why We're Different" Link -->
<p class="additional-link">
<a href="https://github.com/Wagner-HiveChain/hivechain/blob/main/docs/Vision.md" target="_blank">
Why We're Different
</a>
</p>
<!-- Call to Action -->
<div class="cta">
<a href="under-construction.html" class="cta-button">
Contribute on GitHub
</a>
<a href="mailto:wagner@hivechain.dev" class="cta-button secondary">
Sign Up for Updates
</a>
</div>
</div>
<!-- Codebase Button -->
<a href="codebase.html" class="codebase-button" target="_blank">View Codebase</a>
<!-- Codebase Display -->
<div id="codebaseModal" class="codebase-modal"></div>
<script>
async function toggleCodebase() {
let modal = document.getElementById("codebaseModal");
if (modal.style.display === "none" || modal.style.display === "") {
modal.style.display = "block";
if (!modal.innerText) {
try {
let response = await fetch("https://Wagner-HiveChain.github.io/hivechain/docs/combined_code.log");
if (response.ok) {
modal.innerText = await response.text();
} else {
modal.innerText = "Error: Unable to load codebase.";
}
} catch (error) {
modal.innerText = "Error: Unable to fetch code.";
}
}
} else {
modal.style.display = "none";
}
}
</script>
</body>
</html>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Under Construction | HiveChain</title>
<!-- Link to your external stylesheet -->
<link rel="stylesheet" href="website/styles.css">
<style>
/* Fallback styling if the external CSS doesn't load */
body {
margin: 0;
font-family: Arial, sans-serif;
background: #f7f7f7;
color: #333;
display: flex;
align-items: center;
justify-content: center;
height: 100vh;
text-align: center;
background-image: url('website/background.jpg');
background-size: cover;
background-position: center;
}
.container {
background: rgba(255, 255, 255, 0.85);
padding: 2rem;
border-radius: 8px;
box-shadow: 0 2px 10px rgba(0,0,0,0.1);
max-width: 600px;
margin: 1rem;
}
h1 {
font-size: 2.5rem;
margin-bottom: 1rem;
}
p {
font-size: 1.25rem;
margin-bottom: 1.5rem;
}
a {
text-decoration: none;
color: #fff;
background: #007BFF;
padding: 0.75rem 1.5rem;
border-radius: 5px;
transition: background 0.3s;
}
a:hover {
background: #0056b3;
}
</style>
</head>
<body>
<div class="container">
<h1>Under Construction</h1>
<p>We're working hard to build HiveChain—rethinking AI development and putting power back into your hands. Check back soon for updates!</p>
<a href="mailto:your-email@example.com">Sign Up for Updates</a>
</div>
</body>
</html>
/* ------------------------------- GLOBAL RESET ------------------------------- */
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
/* --------------------------- GLOBAL STYLING --------------------------- */
body {
font-family: "JetBrains Mono", monospace;
color: #ffffff;
background-color: #000000;
height: 100vh;
width: 100vw;
overflow: hidden; /* Prevents scrollbars if not needed; remove if scrolling is desired */
display: flex;
justify-content: center;
align-items: center;
text-align: center;
}
/* --------------------- BACKGROUND & OVERLAY LAYERS --------------------- */
.background-image {
position: fixed;
top: 0;
left: 0;
width: 100%;
height: 100%;
background: url("background.webp") center center / cover no-repeat;
z-index: -2; /* Behind everything else */
}
.background-overlay {
position: fixed;
top: 0;
left: 0;
width: 100%;
height: 100%;
background: rgba(0, 0, 0, 0.5); /* Dark overlay for readability */
z-index: -1; /* Stays behind the content but above the background image */
}
/* --------------------------- CONTENT STYLING --------------------------- */
.content {
max-width: 600px; /* Limits the content width for better readability */
padding: 0 1rem; /* Small horizontal padding for mobile */
}
.content h1 {
font-size: 2.5rem;
margin-bottom: 0.5rem;
}
.content p {
font-size: 1rem;
line-height: 1.5;
margin-bottom: 1rem;
}
.coming-soon {
font-weight: bold;
font-size: 1.1rem;
color: #f8d800; /* Slightly accent color to draw attention */
}
/* ----------------------- CTA (CALL TO ACTION) ----------------------- */
.cta {
margin-top: 2rem;
display: flex;
gap: 1rem;
justify-content: center;
flex-wrap: wrap; /* Allows buttons to wrap on smaller screens */
}
.cta-button {
text-decoration: none;
color: #ffffff;
background-color: #ff5c5c; /* Vibrant color for the main button */
padding: 0.75rem 1.5rem;
font-weight: bold;
border-radius: 4px;
transition: background-color 0.3s ease;
}
.cta-button:hover {
background-color: #ff7878;
}
.secondary {
background-color: #333333;
}
.secondary:hover {
background-color: #555555;
}
/* --------------------------- CODEBASE BUTTON & MODAL --------------------------- */
.codebase-button {
position: fixed;
bottom: 10px;
right: 10px;
padding: 8px 12px;
font-size: 12px;
background: #444;
color: #fff;
border: none;
cursor: pointer;
border-radius: 5px;
opacity: 0.3; /* Low visibility by default */
transition: opacity 0.3s;
}
.codebase-button:hover {
opacity: 1; /* Fully visible when hovered */
}
.codebase-modal {
display: none;
position: fixed;
bottom: 50px;
right: 10px;
width: 340px;
max-height: 450px;
overflow-y: auto;
background: #111; /* Matches dark theme */
color: #fff;
padding: 12px;
border-radius: 5px;
font-family: monospace;
font-size: 12px;
box-shadow: 0px 4px 6px rgba(0, 0, 0, 0.2);
white-space: pre-wrap; /* Ensures code formatting remains intact */
border: 1px solid #444;
}
/* --------------------------- MEDIA QUERIES --------------------------- */
/* Mobile & Smaller Devices */
@media (max-width: 480px) {
.content h1 {
font-size: 2rem;
}
.content p {
font-size: 0.95rem;
}
.cta-button {
width: 100%; /* Buttons stack and take full width on narrow screens */
text-align: center;
}
.codebase-modal {
width: 280px;
font-size: 10px;
}
}
/* Larger Screens / Ultra-Wide */
@media (min-width: 1500px) {
.content h1 {
font-size: 3rem;
}
.content p {
font-size: 1.25rem;
}
.cta-button {
font-size: 1.1rem;
padding: 1rem 2rem;
}
.codebase-modal {
width: 400px;
font-size: 14px;
}
}
/* ---------------------- ADDITIONAL LINK STYLING ---------------------- */
.additional-link {
margin-top: 1rem;
font-size: 1.1rem;
}
.additional-link a {
text-decoration: none;
color: #00aaff;
border-bottom: 1px solid transparent;
transition: border-bottom 0.3s ease;
}
.additional-link a:hover {
border-bottom: 1px solid #00aaff;
}
#!/usr/bin/env python3
"""
HiveChain Codebase HTML Generator (Refined Version)
This script scans the project directory, collects all relevant code files,
and generates a structured HTML file (`docs/codebase.html`) with proper formatting
for display on GitHub Pages. It ensures clear separation between files,
syntax highlighting via Prism.js, and easy navigation.
"""
from pathlib import Path
from html import escape
from datetime import datetime
# ------------------------------------------------
# STEP 1: DEFINE DIRECTORIES & SETTINGS
# ------------------------------------------------
PROJECT_ROOT = Path("c:/projects/hivechain")  # Hard-coded; adjust to your local checkout path.
DOCS_DIR = PROJECT_ROOT / "docs"
DOCS_DIR.mkdir(exist_ok=True)
OUTPUT_FILE = DOCS_DIR / "codebase.html"
# Expanded lists of directories and files to exclude
EXCLUDED_DIRS = {
"__pycache__", "tests", "migrations", "node_modules",
".git", ".vscode", ".idea", "__MACOSX"
}
EXCLUDED_FILES = {
".env", ".DS_Store", "Thumbs.db", "codebase.html", "combined_code.log"
}
# Only gather files with these extensions
VALID_EXTENSIONS = {".py", ".json", ".md", ".yaml", ".html", ".css"}
# Mapping file extensions to Prism syntax classes
EXT_TO_PRISM = {
".py": "language-python",
".json": "language-json",
".md": "language-markdown",
".yaml": "language-yaml",
".html": "language-html",
".css": "language-css",
}
# ------------------------------------------------
# STEP 2: COLLECT RELEVANT FILES
# ------------------------------------------------
def collect_files(root_dir: Path):
"""
Recursively collect valid files while ignoring excluded directories and files.
Sorted by path for consistent ordering in the output.
"""
all_files = root_dir.rglob("*")
# Filter out directories and files we don't want
relevant_files = []
for f in sorted(all_files, key=lambda p: p.as_posix()):
# Skip if it's not a file
if not f.is_file():
continue
# Skip if file extension is not in VALID_EXTENSIONS
if f.suffix.lower() not in VALID_EXTENSIONS:
continue
# Skip if file name is in EXCLUDED_FILES
if f.name in EXCLUDED_FILES:
continue
# Skip if any part of the path is in EXCLUDED_DIRS
if any(ex_dir in f.parts for ex_dir in EXCLUDED_DIRS):
continue
# Also skip if this is the output file
if f.name == OUTPUT_FILE.name:
continue
relevant_files.append(f)
return relevant_files
# ------------------------------------------------
# STEP 3: PROCESS FILE CONTENTS
# ------------------------------------------------
def process_files(file_list):
"""
Read and process each file to store its escaped content and syntax class.
"""
file_data = []
for fpath in file_list:
try:
content = fpath.read_text(encoding="utf-8")
except UnicodeDecodeError:
print(f"Warning: Unable to read {fpath} due to encoding issues.")
continue
except Exception as e:
print(f"Warning: Unable to read {fpath}: {e}")
continue
file_data.append({
"name": fpath.relative_to(PROJECT_ROOT).as_posix(),
"syntax_class": EXT_TO_PRISM.get(fpath.suffix.lower(), "language-none"),
"content": escape(content),
})
return file_data
# ------------------------------------------------
# STEP 4: GENERATE HTML OUTPUT
# ------------------------------------------------
def generate_html(file_data):
"""
Generate structured HTML with a Table of Contents, collapsible sections,
syntax highlighting, and navigation links.
"""
timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
html_parts = [
"<!DOCTYPE html>",
"<html lang='en'>",
" <head>",
" <meta charset='UTF-8'>",
" <meta name='viewport' content='width=device-width, initial-scale=1.0'>",
" <title>HiveChain Codebase</title>",
" <!-- Prism.js theme -->",
" <link href='https://cdnjs.cloudflare.com/ajax/libs/prism/1.29.0/themes/prism-tomorrow.min.css' rel='stylesheet' />",
" </head>",
" <body>",
" <a id='top'></a>",
" <h1>HiveChain Codebase</h1>",
f" <p>Last updated: {timestamp}</p>",
" <h2>Table of Contents</h2>",
" <ul>",
]
# Build the Table of Contents
for i, fdata in enumerate(file_data):
file_id = f"file-{i}"
html_parts.append(f" <li><a href='#{file_id}'>{fdata['name']}</a></li>")
html_parts.append(" </ul>")
# Build the collapsible code sections
for i, fdata in enumerate(file_data):
file_id = f"file-{i}"
html_parts.append(" <hr>")
html_parts.append(f" <h2 id='{file_id}'>{fdata['name']}</h2>")
# Use a <details> section to make the code collapsible
html_parts.append(" <details open>")
html_parts.append(" <summary>Show/Hide Code</summary>")
html_parts.append(" <br>")
# Syntax-highlighted code block
html_parts.append(
f" <pre><code class='{fdata['syntax_class']}'>{fdata['content']}</code></pre>"
)
html_parts.append(" </details>")
# Back to top link
html_parts.append(" <p><a href='#top'>Back to Top</a></p>")
# Include the Prism.js scripts
html_parts.append(" <script src='https://cdnjs.cloudflare.com/ajax/libs/prism/1.29.0/prism.min.js'></script>")
html_parts.append(" <script src='https://cdnjs.cloudflare.com/ajax/libs/prism/1.29.0/components/prism-python.min.js'></script>")
html_parts.append(" <script src='https://cdnjs.cloudflare.com/ajax/libs/prism/1.29.0/components/prism-markdown.min.js'></script>")
html_parts.append(" <script src='https://cdnjs.cloudflare.com/ajax/libs/prism/1.29.0/components/prism-json.min.js'></script>")
html_parts.append(" <script src='https://cdnjs.cloudflare.com/ajax/libs/prism/1.29.0/components/prism-yaml.min.js'></script>")
html_parts.append(" <script src='https://cdnjs.cloudflare.com/ajax/libs/prism/1.29.0/components/prism-css.min.js'></script>")
html_parts.append(" <script src='https://cdnjs.cloudflare.com/ajax/libs/prism/1.29.0/components/prism-markup.min.js'></script>")
# Initialize Prism after loading the language components
html_parts.append(" <script>")
html_parts.append(" document.addEventListener('DOMContentLoaded', function() {")
html_parts.append(" Prism.highlightAll();")
html_parts.append(" });")
html_parts.append(" </script>")
html_parts.append(" </body>")
html_parts.append("</html>")
return "\n".join(html_parts)
# ------------------------------------------------
# STEP 5: WRITE TO OUTPUT FILE
# ------------------------------------------------
def write_output(html_result):
"""
Write the generated HTML to docs/codebase.html.
"""
OUTPUT_FILE.write_text(html_result, encoding="utf-8")
# ------------------------------------------------
# STEP 6: MAIN ENTRY POINT
# ------------------------------------------------
def main():
print("Collecting files...")
files = collect_files(PROJECT_ROOT)
print("Processing files...")
processed_data = process_files(files)
print("Generating HTML content...")
html_result = generate_html(processed_data)
print("Writing to output file...")
write_output(html_result)
print(f"Success! Codebase HTML generated at: {OUTPUT_FILE}")
print("Preview of generated HTML:")
print("-" * 50)
print(html_result[:500]) # Print first 500 characters for preview
print("-" * 50)
if __name__ == "__main__":
main()
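# Typical invocation, as a sketch: the repository does not fix a filename for
# this script, so assume it is saved as tools/generate_codebase_html.py.
#
#   python tools/generate_codebase_html.py
#
# The script prints its progress messages and writes docs/codebase.html,
# ready to be served via GitHub Pages.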
#!/usr/bin/env python3
# run.py
"""
Module: run.py
Responsibility:
- Provide an interactive CLI that uses the complete HiveChain pipeline.
- Force the use of the GPT-4o mini model.
- Handle user conversation, exit gracefully, and print the standardized output with HiveChain metadata.
"""
import sys
from hivechain.config_handler import load_config
from hivechain.hivechain_core import init_config
from hivechain.pipeline import process_request
def main():
# Load configuration and initialize the library.
config = load_config()
init_config(config)
# Force using the GPT-4o mini model.
model = "gpt-4o-mini"
print("Welcome to the HiveChain GPT-4o Mini CLI!")
print("Type your message and press Enter. Type 'exit' or 'quit' to end the conversation.\n")
while True:
try:
prompt = input("You: ")
if prompt.strip().lower() in ["exit", "quit"]:
print("Exiting conversation.")
break
# Process the request through the full pipeline with metadata wrapping enabled.
response = process_request(raw_input=prompt, model_name=model, wrap_response=True)
# Print the standardized output (assuming "result" holds the generated text).
print("GPT-4o Mini:", response.get("result", ""), "\n")
if response.get("fallback_used", False):
print("Warning: Fallback formatting was applied to the input.\n")
except KeyboardInterrupt:
print("\nConversation interrupted. Goodbye!")
sys.exit(0)
except Exception as e:
print("Error:", e, file=sys.stderr)
if __name__ == "__main__":
main()
from setuptools import setup, find_packages
import os
# Read the long description from README.md if available
this_directory = os.path.abspath(os.path.dirname(__file__))
with open(os.path.join(this_directory, "README.md"), encoding="utf-8") as fh:
long_description = fh.read()
setup(
name="hivechain",
version="0.1.0",
author="Laura Wagner",
author_email="wagner@hivechain.dev",
description="HiveChain: A Modular AI Orchestration Framework for Transparent and easy to use.",
long_description=long_description,
long_description_content_type="text/markdown",
url="https://www.hivechain.dev",
# This will include all packages under src, including hivechain and its submodules (e.g., provider_adapters)
packages=find_packages(where="src"),
package_dir={"": "src"},
classifiers=[
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Topic :: Software Development :: Libraries",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
],
python_requires=">=3.7",
install_requires=[
"openai",
"python-dotenv",
],
entry_points={
"console_scripts": [
"hivechain-run=hivechain.cli:main",
]
},
)
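# Installation sketch (not part of setup.py itself): with this file at the
# project root, an editable development install plus a quick smoke test of the
# console script declared above would look like:
#
#   pip install -e .
#   hivechain-run "Hello from HiveChain"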
# agent_manager.py
"""
Module: agent_manager.py
Description:
Manages smart agents and multi-agent workflows.
Defines a base Agent interface (e.g., Agent.generate(prompt, context)) and a stub implementation.
Also provides an AgentManager to register agents and delegate tasks.
Key Tasks:
- Define an Agent interface with an abstract method generate(prompt, context).
- Provide a SimpleAgent as a stub implementation.
- Implement an AgentManager to manage multiple agents and aggregate their responses.
Keywords: agents, multi-agent, coordination, plugin.
#Placeholder: Extend to support orchestration of multiple agents, task delegation strategies, and result aggregation.
"""
from abc import ABC, abstractmethod
class Agent(ABC):
"""
Abstract base class for agents.
Each agent should implement the generate method to produce a response given a prompt and optional context.
"""
@abstractmethod
def generate(self, prompt: str, context: dict = None) -> str:
"""
Generate a response based on the provided prompt and optional context.
Args:
prompt (str): The input prompt for the agent.
context (dict, optional): Additional context or memory for the agent.
Returns:
str: The generated response.
"""
pass
class SimpleAgent(Agent):
"""
A simple stub agent implementation.
Returns a placeholder response indicating that the agent received the prompt.
#Placeholder: Replace this stub with actual agent logic.
"""
def generate(self, prompt: str, context: dict = None) -> str:
return f"[Placeholder Response] Received prompt: {prompt}"
class AgentManager:
"""
Manages multiple agents and orchestrates multi-agent workflows.
"""
def __init__(self):
self.agents = [] # List to store registered agents.
def register_agent(self, agent: Agent):
"""
Registers an agent with the manager.
Args:
agent (Agent): An instance of a subclass of Agent.
"""
self.agents.append(agent)
def delegate_task(self, prompt: str, context: dict = None) -> dict:
"""
Delegates a task to all registered agents and aggregates their responses.
Args:
prompt (str): The task prompt.
context (dict, optional): Additional context to pass to each agent.
Returns:
dict: A dictionary mapping agent identifiers (e.g., "agent_0") to their generated responses.
#Placeholder: Enhance with logic for selecting a subset of agents, parallel execution, or more advanced aggregation.
"""
responses = {}
for idx, agent in enumerate(self.agents):
responses[f"agent_{idx}"] = agent.generate(prompt, context)
return responses
# For local testing, you can uncomment the block below:
# if __name__ == "__main__":
# agent_manager = AgentManager()
# agent_manager.register_agent(SimpleAgent())
# agent_manager.register_agent(SimpleAgent())
# result = agent_manager.delegate_task("What is the weather today?", {"location": "San Francisco"})
# print("Agent responses:", result)
# api_caller.py
"""
Module: api_caller.py
Responsibility:
- Abstract the API call logic so that the rest of the pipeline remains agnostic
to the underlying API details.
- Retrieve model configuration from the global configuration.
      - Delegate API-key and endpoint setup to the provider adapters.
- Clamp parameters (e.g., temperature, max_tokens) using configuration limits.
- Delegate the API call to the appropriate provider adapter.
TODO:
- Enhance error handling and support for additional model types as needed. #Placeholder
- Adapt the API call to any future changes in the underlying API interface. #Placeholder
"""
from hivechain.hivechain_core import get_config
def call_api(structured_prompt: dict, model_name: str, temperature: float = None, max_tokens: int = None) -> dict:
"""
Given a structured prompt and model configuration, delegate the API call to the appropriate provider adapter.
Args:
structured_prompt (dict): The dictionary with at least a "prompt" key.
model_name (str): The name of the model to use (must be present in the configuration under the default provider).
temperature (float, optional): Override the default temperature.
max_tokens (int, optional): Override the default max tokens.
Returns:
dict: The raw API response as returned by the provider adapter.
"""
# Retrieve the global configuration.
config = get_config()
# Determine the default provider (e.g., "openai") from the configuration.
provider_name = config.get("default_provider", "openai")
provider_cfg = config["providers"].get(provider_name)
if provider_cfg is None:
raise ValueError(f"Provider '{provider_name}' is not configured in the configuration.")
# Retrieve the model configuration from the provider's models.
model_cfg = provider_cfg["models"].get(model_name)
if model_cfg is None:
raise ValueError(f"Model '{model_name}' is not defined under provider '{provider_name}'.")
# Set temperature and max_tokens using defaults if not provided.
temperature = temperature if temperature is not None else model_cfg.get("default_temperature", 0.7)
max_tokens = max_tokens if max_tokens is not None else model_cfg.get("default_max_tokens", 1000)
# Validate and clamp parameters using the limits defined in the configuration.
limits = config.get("parameter_limits", {})
if "temperature" in limits:
minT = limits["temperature"].get("min", 0.0)
maxT = limits["temperature"].get("max", 1.0)
if temperature < minT or temperature > maxT:
print(f"Clamping temperature to [{minT}, {maxT}]")
temperature = max(min(temperature, maxT), minT)
if "max_tokens" in limits:
minN = limits["max_tokens"].get("min", 1)
maxN = limits["max_tokens"].get("max", 2048)
if max_tokens < minN or max_tokens > maxN:
print(f"Clamping max_tokens to [{minN}, {maxN}]")
max_tokens = max(min(max_tokens, maxN), minN)
    # API keys and endpoints are configured inside each provider adapter,
    # so no global client state needs to be set up here; we only dispatch
    # to the adapter that matches the model type.
    if model_cfg["type"] not in ("openai", "deepseek"):
        raise NotImplementedError(f"Model type '{model_cfg['type']}' is not supported in the adapter layer.")
# Delegate the API call to the appropriate provider adapter.
# For this version, we assume the default provider is "openai" or "deepseek".
if model_cfg["type"] == "openai":
from hivechain.provider_adapters.openai_provider import generate_text as provider_generate
elif model_cfg["type"] == "deepseek":
from hivechain.provider_adapters.deepseek_provider import generate_text as provider_generate
else:
raise NotImplementedError(f"Provider adapter for model type '{model_cfg['type']}' is not implemented.")
response = provider_generate(structured_prompt["prompt"], {
"engine": model_cfg["engine"],
"temperature": temperature,
"max_tokens": max_tokens,
**({"api_base": model_cfg.get("api_base")} if model_cfg["type"] == "deepseek" else {})
})
# Placeholder: Additional error handling can be added here if needed. #Placeholder
return response
# For local testing, you can uncomment the block below:
# if __name__ == "__main__":
# # Simulate a structured prompt from the input formatter.
# test_prompt = {"prompt": "Tell me a joke about programming.", "fallback": False, "details": "Standard formatting applied."}
# try:
# result = call_api(test_prompt, model_name="gpt-4o-mini")
# print(result)
# except Exception as e:
# print("Error during API call:", e)
# cli.py
"""
Module: cli.py
Responsibility:
- Provide a command-line interface (CLI) for interacting with the HiveChain pipeline.
- Load configuration, initialize the library, parse user input, call the pipeline,
and display the final output.
- This CLI leverages the new modular pipeline approach and allows toggling metadata wrapping.
"""
import argparse
from .config_handler import load_config
from .hivechain_core import init_config
from .pipeline import process_request
def main():
# Load configuration (from config.json, .env, etc.)
config = load_config()
# Initialize the library's configuration (immutable by default)
init_config(config)
parser = argparse.ArgumentParser(description="HiveChain AI Conversation CLI")
parser.add_argument("prompt", type=str, help="User prompt for the AI model")
parser.add_argument("--model", choices=config["models"].keys(),
help="Which model to use (as defined in config.json; defaults to config's default)")
parser.add_argument("--temperature", type=float, help="Temperature for generation")
parser.add_argument("--max-tokens", type=int, dest="max_tokens", help="Max tokens for the response")
parser.add_argument("--no-wrap", action="store_true",
help="Return raw API response without HiveChain metadata (default wraps response)")
args = parser.parse_args()
try:
# Determine wrap_response: if --no-wrap is provided, disable metadata wrapping.
wrap_response = not args.no_wrap
# Process the request through the complete pipeline.
response = process_request(
raw_input=args.prompt,
model_name=args.model,
temperature=args.temperature,
max_tokens=args.max_tokens,
wrap_response=wrap_response
)
# If the response is wrapped (a dict with a "result" key), print the result and any metadata.
if isinstance(response, dict) and "result" in response:
            model_used = args.model or next(iter(config["providers"][default_provider]["models"]))
print(f"{model_used} response:\n{response.get('result', '')}\n")
if response.get("fallback_used", False):
print("Warning: Fallback formatting was applied to the input.")
else:
# Otherwise, assume it's the raw API response and print it directly.
print(response)
except Exception as e:
print(f"Error: {e}")
if __name__ == "__main__":
main()
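# Example invocations once the package is installed (model names are
# illustrative and must exist under your config.json's default provider):
#   hivechain-run "Summarize the HiveChain pipeline in one sentence."
#   hivechain-run "Tell me a joke." --model gpt-4o-mini --temperature 0.3 --no-wrap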
# config_handler.py
import json
import os
from dotenv import load_dotenv
def load_config(config_path="config.json"):
"""Load configuration from a JSON file and environment variables."""
load_dotenv() # Load environment variables from .env file
try:
with open(config_path, "r") as f:
config = json.load(f)
except (FileNotFoundError, json.JSONDecodeError):
raise RuntimeError("Error loading configuration file.")
    # Inject API keys from environment variables, using each provider's
    # declared "api_key_env" name (see config_validator.py for the schema).
    for provider_name, provider_cfg in config.get("providers", {}).items():
        env_key = provider_cfg.get("api_key_env", f"{provider_name.upper()}_API_KEY")
        provider_cfg["api_key"] = os.getenv(env_key, "")
return config
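# For local testing, you can uncomment the block below (assumes a config.json
# with a "providers" section in the working directory, plus an optional .env):
# if __name__ == "__main__":
#     cfg = load_config()
#     for name, provider in cfg.get("providers", {}).items():
#         print(name, "API key set:", bool(provider.get("api_key")))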
# config_validator.py
"""
Module: config_validator.py
Description:
Validate the structure and contents of config.json at startup.
Checks for required keys and proper types, providing clear error messages if any
requirements are not met.
#Placeholder: Consider integrating JSON Schema validation in the future for more robust checks.
"""
def validate_config(config: dict) -> bool:
"""
Validates the configuration dictionary.
Args:
config (dict): The configuration dictionary loaded from config.json.
Returns:
bool: True if the configuration is valid.
Raises:
ValueError: If any required key is missing or has an incorrect type.
"""
# Check for required top-level keys.
required_top_keys = ["default_provider", "providers", "parameter_limits", "features"]
for key in required_top_keys:
if key not in config:
raise ValueError(f"Missing top-level key: '{key}' in configuration.")
# Validate default_provider is a string.
if not isinstance(config["default_provider"], str):
raise ValueError("The 'default_provider' should be a string.")
# Validate that providers is a dictionary.
providers = config["providers"]
if not isinstance(providers, dict):
raise ValueError("The 'providers' key should be an object (dict).")
# Check each provider.
for provider_name, provider_conf in providers.items():
if not isinstance(provider_conf, dict):
raise ValueError(f"Provider '{provider_name}' should be a dict.")
if "api_key_env" not in provider_conf:
raise ValueError(f"Provider '{provider_name}' missing 'api_key_env'.")
if "models" not in provider_conf:
raise ValueError(f"Provider '{provider_name}' missing 'models' key.")
models = provider_conf["models"]
if not isinstance(models, dict):
raise ValueError(f"'models' under provider '{provider_name}' should be a dict.")
for model_name, model_conf in models.items():
# Check required keys for each model.
required_model_keys = ["engine", "type", "default_temperature", "max_token_input", "max_token_output", "per_token"]
for key in required_model_keys:
if key not in model_conf:
raise ValueError(f"Model '{model_name}' under provider '{provider_name}' is missing key: '{key}'.")
# Validate that per_token is a dictionary.
per_token = model_conf["per_token"]
if not isinstance(per_token, dict):
raise ValueError(f"'per_token' for model '{model_name}' under provider '{provider_name}' should be a dict.")
for token_key in ["input", "cached_input", "output"]:
if token_key not in per_token:
raise ValueError(f"'per_token' for model '{model_name}' missing key '{token_key}'.")
# Validate parameter_limits.
parameter_limits = config["parameter_limits"]
if not isinstance(parameter_limits, dict):
raise ValueError("'parameter_limits' should be a dict.")
for param in ["temperature", "max_tokens"]:
if param not in parameter_limits:
raise ValueError(f"Missing '{param}' in 'parameter_limits'.")
limits = parameter_limits[param]
if not isinstance(limits, dict):
raise ValueError(f"'{param}' in 'parameter_limits' should be a dict.")
for bound in ["min", "max"]:
if bound not in limits:
raise ValueError(f"'{param}' in 'parameter_limits' is missing '{bound}'.")
# Validate features.
features = config["features"]
if not isinstance(features, dict):
raise ValueError("'features' should be a dict.")
for feature in ["use_memory", "use_retrieval"]:
if feature not in features:
raise ValueError(f"Missing feature flag: '{feature}' in 'features'.")
return True
# For local testing: Run this module directly to validate config.json.
if __name__ == "__main__":
import json
try:
with open("config.json", "r", encoding="utf-8") as f:
config_data = json.load(f)
if validate_config(config_data):
print("Configuration is valid.")
except Exception as e:
print("Configuration validation error:", e)
# fallback_formatter.py
"""
Module: fallback_formatter.py
Responsibility:
- When raw input fails validation, this module uses a fallback formatting agent
(e.g., a cost-effective model like GPT-4o mini) to reformat and sanitize the input.
- It inserts placeholder tags (e.g., {sanitized:object.type}) where necessary and
produces a structured prompt that the downstream API caller can handle.
- In this stub, we simulate the fallback behavior. In a complete implementation, the
function would call the formatting agent and process its output.
"""
def sanitize_input(raw_input: str) -> dict:
"""
Processes raw input using a fallback mechanism to generate a structured prompt.
Args:
raw_input (str): The raw user input that failed validation.
Returns:
dict: A dictionary with the following keys:
- "prompt": The sanitized and formatted prompt ready for the API caller.
- "fallback": A boolean flag indicating that fallback formatting was applied.
- "details": (Optional) Additional details on what was sanitized or any tags inserted.
TODO:
- Integrate an API call to a cost-effective model (e.g., GPT-4o mini) to perform the formatting.
- Insert tags or annotations (e.g., {sanitized:object.type}) where parts of the input are unclear or redacted.
- Handle potential errors from the formatting agent and provide informative messages.
"""
# For now, we simulate fallback formatting by simply stripping the input and appending a note.
sanitized_prompt = raw_input.strip()
# Placeholder for API call to formatting agent:
# response = call_formatting_agent(raw_input) # #Placeholder: integrate actual agent call
# sanitized_prompt = response.get("formatted_prompt", sanitized_prompt)
# details = response.get("details", "Fallback formatting applied; details not available.") # #Placeholder
# For this stub, we simply append a note indicating fallback was used.
sanitized_prompt += "\n\n[Note: Fallback formatting applied. Some content may have been sanitized.]"
return {
"prompt": sanitized_prompt,
"fallback": True,
"details": "Fallback formatting applied. Replace with agent response when implemented. #Placeholder"
}
# For quick local testing, you can uncomment the block below:
# if __name__ == "__main__":
# test_input = "raw, messy input that does not conform to schema..."
# result = sanitize_input(test_input)
# print(result)
# hivechain_core.py
"""
Module: hivechain_core.py
Description:
Manages the global configuration and shared state for HiveChain.
By default, the configuration is made immutable to ensure consistency and avoid accidental modifications.
However, an optional parameter allows disabling immutability if dynamic updates are required.
Keywords: singleton, global state, initialization, immutable.
"""
from types import MappingProxyType
_config = None
def init_config(config: dict, immutable: bool = True):
"""
Initialize the library's configuration.
Must be called once by the application before using other library functions.
Args:
config (dict): The configuration dictionary to be set as the global configuration.
immutable (bool, optional): If True (default), the configuration will be made immutable.
Set to False to allow modifications at runtime.
"""
global _config
if immutable:
_config = MappingProxyType(config)
else:
_config = config
def get_config():
"""
Retrieve the current configuration.
Returns:
dict: The global configuration.
Raises:
RuntimeError: If the configuration has not been initialized.
"""
if _config is None:
raise RuntimeError("Configuration not initialized. Please call init_config() first.")
return _config
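# For local testing, you can uncomment the block below:
# if __name__ == "__main__":
#     init_config({"default_provider": "openai"})          # immutable by default
#     print(get_config()["default_provider"])              # -> "openai"
#     try:
#         get_config()["default_provider"] = "deepseek"    # mutation is rejected
#     except TypeError as e:
#         print("Immutable config blocked the write:", e)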
# input_formatter.py
"""
Module: input_formatter.py
Responsibility:
- Process raw user input and return a structured prompt.
- Validate input against expected criteria.
- Mark the input as valid or not, so the pipeline can decide whether to trigger fallback formatting.
"""
def format_input(raw_input: str) -> dict:
"""
Processes the raw input into a structured prompt.
Args:
raw_input (str): The raw user input.
Returns:
dict: A dictionary containing:
- "prompt": the processed text.
- "valid": a boolean flag indicating whether the input fits our expected schema.
Current simple rule:
- If the stripped input has fewer than 10 characters, it is considered not valid.
#Placeholder: Add more complex validation logic (e.g., regex checks, JSON schema validation) as needed.
"""
cleaned_input = raw_input.strip()
# Simple rule: inputs shorter than 10 characters are flagged as not valid.
if len(cleaned_input) < 10:
return {"prompt": cleaned_input, "valid": False}
# Otherwise, consider it valid.
return {"prompt": cleaned_input, "valid": True}
# For testing purposes, you can uncomment the block below:
# if __name__ == "__main__":
# test_input = "Hello"
# print(format_input(test_input))
# memory_manager.py
"""
Module: memory_manager.py
Description:
Manages conversation memory for multi-turn interactions using a simple rolling window.
For now, this implementation is list-based and maintains a history of conversation messages.
If the total token count exceeds a defined maximum, older messages are removed.
Key Tasks:
- Append messages to memory.
- Return the conversation history.
- Enforce a maximum token count (using a placeholder token counting method).
#Placeholder: Replace the simple whitespace-based token count with a more accurate tokenizer.
#Placeholder: In the future, swap this implementation with a vector store or a persistent memory solution.
"""
class MemoryManager:
def __init__(self, max_tokens: int = 2048):
"""
Initializes the MemoryManager with a maximum token limit.
Args:
max_tokens (int): The maximum total token count for the conversation memory.
"""
self.max_tokens = max_tokens
self.memory = [] # List of messages, each a dict with keys 'role' and 'content'.
def _count_tokens(self, text: str) -> int:
"""
Counts tokens in a given text using a simple whitespace split.
#Placeholder: Replace with a proper tokenization method.
Args:
text (str): The text to count tokens from.
Returns:
int: The number of tokens.
"""
return len(text.split())
def add_message(self, role: str, content: str):
"""
Adds a message to the conversation memory. If the total token count exceeds max_tokens,
older messages are removed until the total is within the limit.
Args:
role (str): The role of the message sender (e.g., 'user', 'assistant').
content (str): The content of the message.
"""
new_message = {"role": role, "content": content}
self.memory.append(new_message)
        # Enforce the maximum token limit by evicting the oldest messages first.
        total_tokens = sum(self._count_tokens(msg["content"]) for msg in self.memory)
        while total_tokens > self.max_tokens and self.memory:
            self.memory.pop(0)
            total_tokens = sum(self._count_tokens(msg["content"]) for msg in self.memory)
def get_memory(self) -> list:
"""
Returns the current conversation history.
Returns:
list: A copy of the conversation memory.
"""
return self.memory.copy()
def reset_memory(self):
"""
Clears the entire conversation memory.
"""
self.memory.clear()
# Singleton instance for easy use across the application.
memory_manager = MemoryManager()
# For local testing, you can uncomment the block below:
# if __name__ == "__main__":
# memory_manager.add_message("user", "Hello, how are you?")
# memory_manager.add_message("assistant", "I'm good, thank you!")
# print("Current Memory:", memory_manager.get_memory())
# memory_manager.reset_memory()
# print("Memory after reset:", memory_manager.get_memory())
# output_formatter.py
"""
Module: output_formatter.py
Responsibility:
- Transform the raw API response (e.g., from openai.ChatCompletion.create) into a standardized output.
- Extract key pieces of information such as the generated text.
- Include metadata (e.g., whether fallback formatting was used) if applicable.
TODO:
- Enhance error handling as needed for different API response formats.
- Optionally add more detailed logging or extraction of additional metadata.
"""
def format_output(api_response) -> dict:
"""
Transforms the raw API response into a standardized format.
Args:
api_response (dict): The raw response from the API caller.
Returns:
dict: A dictionary containing:
- "result": The generated text from the API.
- "raw": The full, unmodified API response.
- Optionally, you can add extra keys for metadata (e.g., fallback_used, processing_time, etc.)
Raises:
ValueError: If the expected keys are not found in the API response.
"""
    try:
        # Extract the generated text. Supports both dict-style responses
        # ({'choices': [{'message': {'content': ...}}], ...}) and openai>=1.0
        # response objects, which expose the same shape via attributes.
        if isinstance(api_response, dict):
            generated_text = api_response['choices'][0]['message']['content']
        else:
            generated_text = api_response.choices[0].message.content
    except (KeyError, IndexError, AttributeError) as e:
        raise ValueError("Unexpected API response format. Ensure the API response conforms to the expected structure.") from e
# Construct and return the standardized output.
return {
"result": generated_text,
"raw": api_response,
# Additional metadata can be added here, for example:
# "fallback_used": <True/False>, "details": "..." etc.
}
# For local testing, you can uncomment the block below:
# if __name__ == "__main__":
# # Simulate a raw API response (this structure should match the actual API's response)
# simulated_response = {
# "choices": [
# {"message": {"content": "This is a sample generated text."}}
# ],
# "usage": {"total_tokens": 50}
# }
# output = format_output(simulated_response)
# print(output)
# pipeline.py
"""
Module: pipeline.py
Overview:
This module orchestrates the complete processing of raw user input through the HiveChain pipeline.
It validates and formats the input (using a standard or fallback formatter), calls the API via the
provider adapter through the api_caller module, and formats the API response.
Optionally, it wraps the response with additional HiveChain metadata.
Responsibilities:
1. Input Processing:
- Validate raw input using input_formatter.
- Use fallback_formatter if the input does not match expected schema.
- Otherwise, use standard_formatter.
2. API Invocation:
- Determine the model to use (defaulting to the configuration if unspecified).
- Call the backend API via the api_caller module.
3. Response Wrapping:
- If wrap_response is True (or if the raw response is not dict-like), return a dictionary with:
"raw": the full API response,
"result": the extracted generated text,
"fallback_used": a flag indicating if fallback formatting was applied.
- Otherwise, return the raw API response.
Usage:
Call process_request() with the raw input and any optional overrides.
Set wrap_response=True to obtain HiveChain metadata along with the API response.
"""
from hivechain.input_formatter import format_input
from hivechain.fallback_formatter import sanitize_input
from hivechain.standard_formatter import standard_format_input
from hivechain.api_caller import call_api
# Note: output_formatter is available for further processing if needed.
from hivechain.hivechain_core import get_config
def process_request(raw_input: str, model_name: str = None, temperature: float = None,
max_tokens: int = None, wrap_response: bool = False) -> dict:
"""
Processes raw user input through the complete HiveChain pipeline.
Args:
raw_input (str): The raw user input.
model_name (str, optional): The model name to use. If None, defaults to the configuration's default.
temperature (float, optional): Override for temperature.
max_tokens (int, optional): Override for maximum tokens.
wrap_response (bool, optional): If True, returns a dictionary containing:
- "raw": the raw API response,
- "result": the extracted generated text,
- "fallback_used": a flag indicating if fallback formatting was applied.
Defaults to False (returning the raw response).
Returns:
dict: Either the raw API response or a wrapped response with metadata.
"""
# Step 1: Validate and format the input.
initial_structured = format_input(raw_input)
if not initial_structured.get("valid", False):
print("Warning: Input did not conform to expected schema. Using fallback formatter.")
structured_prompt = sanitize_input(raw_input)
else:
structured_prompt = standard_format_input(raw_input)
    # Step 2: Determine which model to use.
    if model_name is None:
        config = get_config()
        provider_name = config.get("default_provider", "openai")
        # Default to the first model defined under the default provider.
        model_name = next(iter(config["providers"][provider_name]["models"]))
# Step 3: Call the API using the structured prompt.
raw_response = call_api(structured_prompt, model_name, temperature, max_tokens)
# Debug: Uncomment the lines below for debugging purposes.
# print("DEBUG: wrap_response =", wrap_response)
# print("DEBUG: type(raw_response) =", type(raw_response))
# Step 4: Wrap the response if requested or if the raw response is not dict-like.
if wrap_response or not hasattr(raw_response, "get"):
try:
generated_text = raw_response.choices[0].message.content
except (AttributeError, IndexError):
generated_text = ""
return {
"raw": raw_response,
"result": generated_text,
"fallback_used": structured_prompt.get("fallback", False)
}
else:
return raw_response
# For local testing, you can uncomment the block below:
# if __name__ == "__main__":
# test_input = "Hello, tell me a little about yourself."
# result = process_request(test_input, model_name="openai", wrap_response=True)
# print("Final Output:", result)
# deepseek_provider.py
"""
Module: deepseek_provider.py
Responsibility:
- Adapter for DeepSeek.
- Implements generate_text(prompt, params) to call the DeepSeek API.
- Uses environment variable DEEPSEEK_API_KEY and a provider-specific API base.
Keywords: adapter, vendor-specific.
TODO: Integrate with DeepSeek's actual API if different from OpenAI's interface.
"""
import os
import openai # Assuming DeepSeek uses an OpenAI-compatible interface
def generate_text(prompt: str, params: dict) -> dict:
"""
Calls the DeepSeek API to generate text based on the prompt.
Args:
prompt (str): The input prompt.
params (dict): Parameters including:
- "engine": model identifier (default "deepseek-chat")
- "temperature": generation temperature (default 0.5)
- "max_tokens": maximum tokens (default 512)
- "api_base": DeepSeek endpoint (default "https://api.deepseek.com")
Returns:
dict: The API response.
"""
openai.api_key = os.getenv("DEEPSEEK_API_KEY")
openai.api_base = params.get("api_base", "https://api.deepseek.com")
messages = [{"role": "user", "content": prompt}]
response = openai.OpenAI().chat.completions.create(
model=params.get("engine", "deepseek-chat"),
messages=messages,
temperature=params.get("temperature", 0.5),
max_tokens=params.get("max_tokens", 512)
)
return response
# For local testing, you might add:
# if __name__ == "__main__":
# test_params = {"engine": "deepseek-chat", "temperature": 0.5, "max_tokens": 512}
# print(generate_text("Explain deep learning in simple terms.", test_params))
# huggingface_provider.py
"""
Module: huggingface_provider.py
Responsibility:
- Placeholder adapter for HuggingFace models.
- Implements generate_text(prompt, params) as a stub for future integration.
Keywords: adapter, extendability.
TODO: Integrate HuggingFace's API or local inference mechanisms.
"""
def generate_text(prompt: str, params: dict) -> dict:
"""
Simulates generating text using a HuggingFace model.
Args:
prompt (str): The input prompt.
params (dict): Parameters for text generation.
Returns:
dict: A simulated API response.
"""
# #Placeholder: Replace this stub with actual HuggingFace integration.
return {
"choices": [
{"message": {"content": "This is a placeholder response from HuggingFace adapter."}}
]
}
# For local testing, you might add:
# if __name__ == "__main__":
# test_params = {"engine": "mistral-7b", "temperature": 0.5, "max_tokens": 8192}
# print(generate_text("What is the capital of France?", test_params))
# openai_provider.py
"""
Module: openai_provider.py
Responsibility:
- Adapter for OpenAI.
- Implements generate_text(prompt, params) using OpenAI's API.
- Uses environment variable OPENAI_API_KEY and standard OpenAI endpoint.
Keywords: adapter, abstraction, OpenAI.
"""
import os
import openai
def generate_text(prompt: str, params: dict) -> dict:
"""
Calls the OpenAI API to generate text based on the prompt.
Args:
prompt (str): The input prompt.
params (dict): Parameters including:
- "engine": model identifier (default "gpt-3.5-turbo")
- "temperature": generation temperature (default 0.7)
- "max_tokens": maximum tokens (default 1000)
Returns:
dict: The API response.
"""
openai.api_key = os.getenv("OPENAI_API_KEY")
openai.api_base = "https://api.openai.com/v1"
messages = [{"role": "user", "content": prompt}]
response = openai.OpenAI().chat.completions.create(
model=params.get("engine", "gpt-3.5-turbo"),
messages=messages,
temperature=params.get("temperature", 0.7),
max_tokens=params.get("max_tokens", 1000)
)
return response
# For local testing, you might add:
# if __name__ == "__main__":
# test_params = {"engine": "gpt-3.5-turbo", "temperature": 0.7, "max_tokens": 1000}
# print(generate_text("Tell me a joke about programming.", test_params))
# request_builder.py
"""
Module: request_builder.py
Description:
Constructs the API request payload by converting a structured prompt into the format
expected by the backend API. This module separates the request construction from the
network call, enabling easier customization and extension.
Key Tasks:
- Build a basic payload (e.g., a list of messages for chat completions).
- Provide a clear placeholder for additional logic (e.g., inserting context or system messages).
Keywords: payload, construction, abstraction.
"""
def build_request_payload(structured_prompt: dict) -> dict:
"""
Constructs the request payload for the API call.
Args:
structured_prompt (dict): A dictionary containing at least a "prompt" key with the text.
Returns:
dict: A dictionary representing the payload to be sent to the API.
Example:
Input: {"prompt": "Tell me a joke about programming.", ...}
Output: {"messages": [{"role": "user", "content": "Tell me a joke about programming."}]}
#Placeholder: Extend with additional logic to incorporate context, system messages,
# or multi-turn conversation data as needed.
"""
payload = {
"messages": [
{"role": "user", "content": structured_prompt.get("prompt", "")}
]
}
# #Placeholder: Insert additional messages, context, or formatting as required in the future.
return payload
# For local testing, you can uncomment the block below:
# if __name__ == "__main__":
# test_prompt = {"prompt": "Tell me a joke about programming.", "fallback": False, "details": "Standard formatting applied."}
# print(build_request_payload(test_prompt))
# response_processor.py
"""
Module: response_processor.py
Description:
Parses and standardizes API responses.
Extracts the generated text from the API response and optionally converts response objects
to dictionaries if needed.
Key Tasks:
- Extract generated text.
- Optionally convert the response object to a dict (e.g., via .to_dict()) if the API supports it.
Keywords: extraction, standardization, formatting.
"""
def process_response(api_response) -> dict:
"""
Parses the raw API response and standardizes it.
Args:
api_response: The raw response object returned by the API call.
Returns:
dict: A standardized dictionary containing:
- "result": The generated text extracted from the response.
- "raw": The full raw API response.
#Placeholder: If needed, convert the api_response to a dictionary using .to_dict()
"""
try:
# Attempt to extract the generated text using attribute access.
generated_text = api_response.choices[0].message.content
except (AttributeError, IndexError) as e:
raise ValueError("Unexpected API response format: could not extract generated text.") from e
# Return a standardized dictionary.
return {
"result": generated_text,
"raw": api_response
}
# For local testing, you can uncomment the block below:
# if __name__ == "__main__":
# # Simulate a raw API response object (dummy example)
# class FakeMessage:
# def __init__(self, content):
# self.content = content
# class FakeChoice:
# def __init__(self, content):
# self.message = FakeMessage(content)
# class FakeResponse:
# def __init__(self, content):
# self.choices = [FakeChoice(content)]
# fake_response = FakeResponse("This is a sample generated text.")
# processed = process_response(fake_response)
# print(processed)
# retriever.py
"""
Module: retriever.py
Description:
Provides retrieval-augmented generation (RAG) capabilities.
For now, this module returns an empty context or dummy data.
In the future, integrate vector search or document retrieval to provide relevant external context.
Key Tasks:
- Initially return an empty context or placeholder data.
- Future: Replace with a real retrieval system (e.g., vector search, database query, etc.).
Keywords: retrieval, external context, RAG, pluggable.
"""
def retrieve_context(query: str) -> dict:
"""
Retrieves external context based on the query.
Args:
query (str): The user query for which to retrieve relevant external context.
Returns:
dict: A dictionary containing the retrieved context.
For now, returns an empty context.
#Placeholder: Integrate with a vector search or document retrieval system in the future.
"""
# Dummy implementation: return an empty context.
return {
"context": ""
}
# For local testing, you can uncomment the block below:
# if __name__ == "__main__":
# test_query = "What is the capital of France?"
# result = retrieve_context(test_query)
# print("Retrieved context:", result)
# standard_formatter.py
"""
Module: standard_formatter.py
Responsibility:
- Format input that has been validated as conforming to expected patterns.
- Convert the valid raw input into a structured prompt.
- This module is designed to be modular so that you can later extend its behavior
(e.g., adding punctuation, converting to a specific JSON structure, etc.)
"""
def standard_format_input(raw_input: str) -> dict:
"""
Processes the valid input into a structured prompt.
Args:
raw_input (str): The raw user input that has been validated.
Returns:
dict: A dictionary containing:
- "prompt": the formatted text,
- "fallback": False (indicating no fallback was applied),
- "details": a note describing that standard formatting was used.
#Placeholder: Extend with additional formatting logic if needed (e.g., punctuation, capitalization).
"""
# Minimal formatting: trim whitespace.
formatted = raw_input.strip()
return {
"prompt": formatted,
"fallback": False,
"details": "Standard formatting applied. #Placeholder for extended formatting logic."
}
# For local testing, you can uncomment the block below:
# if __name__ == "__main__":
# test_input = "Hello, how are you doing?"
# result = standard_format_input(test_input)
# print(result)