Microcosm’s AI Policy (tl;dr: “No, thank you”)
We were recently asked by a sales rep to make a stronger statement about AI. People don’t usually have to ask us to be bolder, and we were very happy to comply.
Below is where we find ourselves as of April 2026. Going forward, we’ll be keeping our policy document up to date in a more boring format.

Why Do Creatives Use AI?
When we asked whether there is a correlation between AI use and addiction, Dr. Faith Harper pointed out something we hadn’t considered about why people use AI:
“The discomfort of an imperfect process is what people are trying to avoid. But the discomfort is the most important part of our growth as creatives. We’re not only not thinking, we’re not maturing as people.”
She pointed us to an interview with David Bowie, where he explains:
“Always remember that the reason you started working was that there was something inside of yourself that you felt that if you could manifest it in some way, you would understand more about yourself and how you coexist with the rest of society… If you feel safe in the area that you are working in, you’re not working in the right area. Always go a little further into the water than you feel you are capable of being in… when you don’t feel that your feet are quite touching the bottom, you’re just about in the right place to do something exciting.”
We think this gets to the heart of why society has been so quick to believe in AI’s promises. New tech is often transformative, but not always in a positive way, and there are many unexplored questions in this particular technology.
We saw these same human behaviors for decades before AI. Our advice to all creative folks: Lean into the discomfort of imperfection. If there’s something about your work that you think is too rough or isn’t coming out the way you hoped, make a note of it to bring up with your editor. Discomfort is a sign that there’s probably gold to be found there, but AI will just bury it further instead of bringing it to the front.
Why We Don’t Publish Work that Uses AI
Because of our values and humanity, we do our best not to publish any content produced or edited using generative AI. This includes our published works, marketing materials, company documents, social media posts, emails… everything we produce and communicate.
The reason is multifold:
- Our mission and reputation depend on publishing work that speaks compellingly to readers and provides practical, life-saving tools. Generative AI has a distinct voice that is, frankly, insufferable. AI is very bad at producing the kind of text that AI itself says is credible: it looks for sources that are specific, contextual, clear, consistent, and well-cited from experts and reputable outlets, yet it writes in generalities, platitudes, and misinformation. AI favors probabilistic plausibility over veracity. It’s as confident as it is incorrect. If you wrote that way yourself, we would need the same significant edits to make it publishable.
- Copyright. The ownership of AI-generated or even AI-assisted work is a deep legal grey area.
- Efficiency. Using AI may seem like a great time-saver, but it creates more work downstream, according to this Harvard study, and makes you worse at critical thinking, according to this MIT study. Our team is downstream on this: AI may feel more efficient for an author, but untangling its results is a massive time suck for us.
- Environment. It’s hard to both know the devastating environmental impact of this tech and still want to use it. Citing rising costs for local communities, local activists have defeated 26 data center projects to date.
- Future planning. Economists estimate that AI companies will need to grow by 1000% per year to sustain their production costs. This means they will need to dramatically raise their prices and reduce the quality of their service. Even if we could make this technology work for us, it doesn’t seem like a safe bet to rely on in the future.
Our policies and thinking are evolving along with our understanding of AI as well as available technology, common usages, emerging news, and issues. We’re all learning together here.

Marketing
We don’t use AI-generated anything in any of our marketing. Authors and publishers can talk more compellingly about their books than robots, so we let the humans shine.
What’s more, there is no productivity or efficiency benefit to using AI. AI companies want to create the illusion that everyone is using AI for everything, and also want to create the illusion that it actually works. Neither is true. We aren’t using it, and in its current state it doesn’t make sense to do so!
Our marketing team works alongside our sales and editorial departments to develop all of Microcosm’s titles as individual projects, as products within a particular season, and as components of our larger list and mission. Our marketing initiatives are informed by our experience both inside and outside the book industry, as well as our highly detailed data from the proprietary software that links every part of Microcosm’s organization. A bot created in Silicon Valley by people whose focus is getting rich simply does not have the expertise and nuance that our human workers do.
You may have picked up on this from our site or other communications with us, but here’s the thing: Microcosm is a unique publisher because we specialize in timely, niche, weird, and otherwise hard-to-find, passion-driven materials. Because we’re creating tools to save lives and change worlds, we are the anti-vanity press. We select the things we want to publish because there’s nothing else like them. That means that technology built on what’s already out there cannot adequately support the work we publish. Likewise, we’ve built our organization to be flexible so we can make choices according to what we see in our data, what we learn from our customers, and what people really need in this crazy, ever-changing world we’re living through together. That means that technology offering you solutions for publishing in general is not designed to support publishing with Microcosm.
Can I use AI to fill out my author intake form?
Your author intake form is very important for our marketing people—please do not use AI to fill it out. It works best when your totally unique perspective, background, inspirations, goals, and taste inform your answers. This document helps us with every step of the publication process after you turn it in! That includes your jacket copy, book title, cover design, marketing plan, publicity outreach, and beyond. We want this process to be a special potion that can only be made by you and us combining our particular skills and perspectives. We can’t do that if this essential ingredient is artificial! If you have questions or if you get stuck, please ask us instead, and we can sort it out together.

Editorial FAQ
How does Microcosm vet manuscripts for AI?
We vet all submitted text and art using at least one AI-detection app. Our best-practice screening tools are currently Pangram for text and SightEngine for art. Our editors are trained to recognize common signs of AI (and boring manuscripts) and scrutinize every work for these regardless of app results.
We cannot publish any work that can be determined by our editors, a casual reader, and/or a software screening program to contain AI-generated text.
We ask our authors to disclose any AI use that could impact their work. This includes the use of generative LLMs such as ChatGPT, Claude, Gemini, and others at any part of the process, as well as the use of tools such as Grammarly, Perplexity, Google’s suggested-text features, or other products that use AI to alter human-generated text.
In the event that a manuscript fails repeated AI checks, we maintain the publishing rights but do not publish the book because the “author” didn’t write it.
What AI uses are ok and not ok?
This is an area where we are still learning. We have some very clear ideas about what is and isn’t acceptable in work we publish.
Not acceptable:
- It is not ok to enter prompts into an AI app and ask it to generate text. It is still not ok if you trained your AI agent on your own writing. And it is still not ok even if you edited the results significantly after the AI generated it.
- It is not ok to put text you originally wrote through an AI app and ask it to make edits. Not even spelling and grammar edits. Not reading level. Not continuity, and not fact checking. Not adding citations. These apps will do far more than you ask them to.
- It is not ok to consult AI for advice on the phrasing, style, structure, or tone of your piece—anything that might influence your voice and creative choices.
- It is not ok to use AI for anything involving images.
Possibly acceptable but we still need to know:
- It might be ok to use AI for formatting tables, citations, or other messy, non-prose data. We honestly don’t know how many liberties your app of choice will take; we strongly encourage you not to use AI, but if you do use it, save a pre-AI version, carefully check the results, and tell us what you did so we can compare. Then we’ll update this policy based on what we find!
- It might be ok to use AI to convert handwriting or PDFs (for instance, hundreds of pages of your old cut-and-paste zines with a ton of different fonts and angles) to text. Again, check the results with great care and please disclose to us that you did this and give us both versions to compare.
- It might be ok to ask an AI agent to look at your work and give you a checklist of issues to work through yourself. For instance, an author might ask an AI agent to produce a list of problems with continuity of character nicknames or to flag overused words or phrases. But if the AI agent has specific advice about wording or structure, we recommend against taking it, as its voice can influence yours without you realizing it. Again, if you use AI for this, please disclose it and submit both drafts.
I only used AI to clean up / smooth things over / catch typos. Why is that a problem?
- AI takes liberties. You might have only asked it to do a quick proofread, but it often goes beyond what you asked it to do and will change words, phrasing, or in some cases add entire sections of text or even new chapters.
- Editing is your editor’s job. We are good at it. Let us do it. Before AI, people would sometimes get caught up in perfectionism and hire an outside editor to polish their rough draft before submitting it; now people sometimes use AI for a similar reason. It also takes us longer to edit text that’s overly polished, because our brains tell us that it’s “done,” but it’s not interesting to read.
- Copy editing too soon in the editorial process is pointless, since our developmental editors will be asking for substantial revisions.
- Readers are concerned about AI, and any remaining imperfections in the final text help assure them that they are reading the work of a human.
I have a disability that requires me to use AI
Please speak with your editor about how we can support your needs while also producing work that we are able to publish.
You found AI in my work but I disagree
Sometimes we notice signs of AI’s voice in an author’s work, and when we let them know, they say they have not used it. Some common responses:
Your app sucks / It must be a false positive
We use Pangram to check text – it’s currently the best-in-class software for this purpose. Pangram’s false-positive rate is extremely low – less than half of 1% for the type of work we publish. You can read their evaluation of false positives and negatives here.
That said, we never simply take Pangram’s word for it. We are mostly focused on voice and quality. We use Pangram as an initial screening tool and then put our human brains to the task of trying to determine what’s actually going on. If something appears to be entirely generated from prompts, we’ll send you back to the drawing board; otherwise we’ll give you specific editorial feedback about what we are looking for.
If we can’t figure out why Pangram is flagging something, we trust our editors’ human brains beyond the app. After all, the goal isn’t to eradicate all robots, it’s to publish amazing work that meets our style guides and will help readers.
I didn’t use an LLM, but I think another app I used may have sneakily incorporated AI into my work
Yes, this absolutely happens, and it absolutely sucks. Almost all enterprise apps, including Microsoft Word, Google Workspace, and Grammarly, among many others you might regularly use for writing and communication, are starting to incorporate features that prompt you to replace your work with their AI-suggested text. This may be unintentional on your part, but it still results in output that is not your own.
Our best advice is to go into the settings of any app you regularly use and turn off any AI features; if that’s not possible, it may be time to find a new app.
I didn’t use an LLM, the problem is that my work was used to train LLMs so they are writing in my voice.
If you have books in print or have writing on the internet, your work probably was stolen to train LLMs. But then the LLMs are post-trained to have a very narrow range of specific voices of their own, and those specific robot voices are the ones we are not interested in publishing—and that detection apps are designed to look for.
It’s just because I use em-dashes / I use specific words that AI also likes to use
Send us a version of your manuscript without these elements and we can take another look.
That is my unique voice
Friend, we know you can do better.

For perspective: Only a very small percentage of the creative work that comes across our desks tests positive for AI, but the resulting workload for us has been massive. We’ve been on a journey this year, spending many more hours than we want to learning about AI, how people use it, and how people think and talk about it. We like to learn, but good grief, y’all. There are some giant companies pouring HUGE amounts of money into trying to convince us that AI is the new normal. It’s not, and we don’t believe it will be. We appreciate you reading something that empowers you to think critically against that narrative.