We believe technology should amplify human values — not compromise them. Here’s our approach to using AI responsibly.
Our Core Principles #
- Respect for Creators: We will safeguard the rights of artists, writers, and other creative professionals.
- Transparency: We will share our decision-making processes openly, in plain language.
- Fairness: We will design systems that avoid harm to marginalized communities.
- Safety: We will work proactively to mitigate risks of misuse or unintended consequences.
How We Use AI: 3 Key Categories #
Machine Learning (Personalized Matching) #
For example, our recommendation engine matches you to causes based on your preferences. Here is what we are doing to make this use more ethical:
- Bias Mitigation: We will build our recommendation systems to prioritize empathy over engagement metrics.
- Data Integrity: We will prioritize diverse, consent-driven datasets over generative outputs to avoid inaccuracies.
Generative AI (Creative Tools) #
Here is our stance: training on copyrighted works without permission undermines creators’ rights. We will not profit from these tools, nor will we support those who do. That said, where fair use clearly applies, we think it is acceptable to use such models with some caveats:
- Brainstorming: We may use some forms of generative AI for internal brainstorming (e.g., vision boards for collaborating artists) early in our development. As we grow, we will replace these tools with a team of highly qualified humans, including concept artists and concept designers. At the time this policy was first drafted, we did not have funding to hire humans for concept work. That will change.
- No Plagiarism: We will not use raw generated output (including images, voice, or text) in products, marketing, or web content.
- Model Accuracy: Beyond the copyright concerns, the outputs raise accuracy concerns. Frankly, we don’t trust them.
Assistive AI (Empowerment Tools) #
AI tools can serve as powerful assistive devices. Our founder Raymond uses AI to enhance communication as an autistic adult, demonstrating its inclusive potential. We support:
- Accessibility: We support tools that level the playing field for neurodivergent users. Features of generative text models, such as tone detection, article summarization, and intent recognition, can help autistic people bridge social-communication gaps with their neurotypical peers.
- Advocacy: We will continue to push AI model makers and developers to prioritize ethical training data and consent frameworks.
Pledges #
- No Exploitation of Creative Labor: We will not deploy AI-generated art, music, or text in our products. We will respect and safeguard the rights and intellectual property of artists, writers, journalists, developers, and other human contributors.
- Human-First Workflows: We use AI the way we use search engines: as a research and discovery tool. We wouldn’t plagiarize from a news article or a scientific journal, and we won’t plagiarize from an AI model either.
- Open Source Prioritization: We will prioritize open-source tools and open models trained on publicly available data over proprietary, opaque models, which are often trained on copyrighted works.
- Fair Use Advocacy: We will support legal frameworks that enable non-profits and educators to use AI creatively without infringing rights.
Why This Matters #
AI’s potential to harm or help hinges on intent. By centering ethics, we’re proving that transformative technology can coexist with integrity. Whether it’s matching donors to causes or empowering neurodivergent communication, our approach ensures progress serves humanity rather than coming at its expense.
Your insights are vital. Please share your feedback on our approach. Email us at wonderandcode@gmail.com.