Colorado Artificial Intelligence Act
Kids on the Yard's compliance with the Colorado AI Act regarding high-risk AI systems
Effective Date: June 30, 2026
Overview
The Colorado Artificial Intelligence Act establishes requirements for developers and deployers of high-risk artificial intelligence systems. Kids on the Yard is committed to responsible AI use and compliance with this law.
Applicability to Kids on the Yard
Kids on the Yard is operated by Limitless Virtue LLC, a Florida limited liability company headquartered in Miami Shores, Florida. Based on the applicability criteria of the Colorado Artificial Intelligence Act (SB 24-205) described below, Kids on the Yard does not currently meet the criteria for direct application of this law, as Kids on the Yard qualifies for the small-deployer exemption under C.R.S. 6-1-1703(6).
Although Kids on the Yard is not directly subject to the Colorado Artificial Intelligence Act (SB 24-205) as a matter of statutory obligation, we have voluntarily adopted the rights, disclosures, and practices described on this page as a matter of best practice and out of respect for the privacy expectations of our families. Kids on the Yard honors the rights described in the Consumer Rights section below regardless of statutory applicability, through the contact procedures listed at the end of this page.
If Kids on the Yard's circumstances change such that the Colorado Artificial Intelligence Act (SB 24-205) does directly apply, the obligations described here will become legally binding rather than voluntary, and this section will be updated accordingly. Nothing in this voluntary framework limits or waives any rights you may have under the Colorado Artificial Intelligence Act (SB 24-205) in the event the law does directly apply to Kids on the Yard.
What Constitutes High-Risk AI
Under the Colorado AI Act, high-risk AI systems are those that make or are a substantial factor in making consequential decisions regarding:
- Education enrollment and educational opportunities
- Employment and employment opportunities
- Financial and lending services
- Essential government services
- Health-care services
- Housing
- Insurance
- Legal services
Our AI Usage
Kids on the Yard may use AI in ways that could be considered high-risk under this law, including:
- Educational placement recommendations
- Tutor-student matching
- Learning path personalization
- Assessment and progress evaluation
Our Obligations as AI Deployer
1. Risk Management
We implement a risk management policy that:
- Identifies potential algorithmic discrimination risks
- Implements safeguards against discriminatory outcomes
- Regularly monitors AI system performance
- Documents risk management decisions
2. Impact Assessments
Before deploying high-risk AI systems, we conduct impact assessments evaluating:
- Purpose and intended use of the AI system
- Types of data processed
- Known limitations and risks
- Potential for discriminatory impact
- Safeguards implemented
3. Human Oversight
All high-risk AI decision processes at Kids on the Yard include:
- Human review of consequential decisions
- Ability to override AI recommendations
- Training for staff on AI limitations
- Clear accountability for final decisions
4. Transparency
We provide transparency about our AI use:
- Disclosure when AI is used in consequential decisions
- Explanation of how AI influences decisions
- Information about human oversight processes
- Contact information for questions
Consumer Rights
Right to Notice
Colorado consumers have the right to know when AI is used to make or substantially influence consequential decisions affecting them.
Our Disclosure:
- We inform users when AI influences educational recommendations
- Disclosures are clear and accessible
- Additional information available upon request
Right to Explanation
When AI is a substantial factor in a consequential decision, you may request:
- Statement that AI was used
- Description of how AI influenced the decision
- Information about the type of AI system
- Contact for questions or concerns
Right to Human Review
You may request:
- Human review of AI-influenced decisions
- Explanation of the human review process
- Opportunity to provide additional information
- Correction of errors identified
Right to Appeal
If you believe an AI-influenced decision was incorrect:
- Request human review
- Provide additional context or information
- Receive explanation of appeal decision
- Further escalation options available
Algorithmic Discrimination Prevention
Our Commitment
Kids on the Yard is committed to preventing algorithmic discrimination based on:
- Race or ethnicity
- Color
- National origin
- Limited proficiency in English
- Sex or gender
- Sexual orientation
- Religion
- Age
- Disability
- Genetic information
- Reproductive health
- Veteran status
- Any other classification protected under Colorado or federal law
Safeguards Implemented
- Regular bias audits of AI systems
- Diverse training data requirements
- Fairness metrics monitoring
- Third-party assessments when appropriate
- Prompt remediation of identified issues
Documentation and Records
What We Maintain
- AI system documentation and specifications
- Risk assessments and impact assessments
- Bias audit results
- Consumer complaints and resolutions
- Training and oversight records
Retention Period
Documentation is retained for at least three years after the relevant AI system is discontinued, or longer where required by law.
Reporting and Compliance
Annual Review
We conduct annual reviews of:
- AI system performance
- Discrimination testing results
- Consumer feedback and complaints
- Compliance with this policy
Incident Response
If discriminatory outcomes are identified:
- Immediate investigation
- System suspension if necessary
- Remediation implementation
- Affected consumer notification
- Documentation of resolution
Contact Information
For questions about AI use or to exercise your rights:
AI Governance Team
Kids on the Yard
Limitless Virtue LLC
9701 NE 2nd Ave, Suite #1069
Miami Shores, Florida 33138
U.S.A.
| Contact | Information |
|---|---|
| AI Questions | [email protected] |
| Privacy Team | [email protected] |
| General | [email protected] |
| Phone | +1 786-382-2000 |
Last Updated: January 1, 2026