
Smart Compose Killed Romantic Writing: The Hidden Cost of AI-Assisted Love Letters

We let algorithms finish our sentences and lost the ability to start them with feeling.

The Love Letter Nobody Wrote

It’s Valentine’s Day, and somewhere right now, someone is staring at a blank text field, trying to write something meaningful to someone they love. They type “I just wanted to say” and a grey suggestion appears: “how much you mean to me.” They tap to accept. They type “Every day with you” and the machine offers: “is a blessing.” They accept again. Three more auto-completed phrases later, they’ve produced a perfectly adequate message that sounds like it was written by a greeting card algorithm. Because, in a sense, it was.

This is what romance looks like in 2028. Not the grand catastrophe of machines replacing human connection — that would be too dramatic, too easy to resist. Instead, it’s something subtler and more insidious: machines gently steering every personal expression toward the statistical mean. The words are technically yours. You chose to accept each suggestion. But the emotional fingerprint — the specificity, the awkwardness, the raw vulnerability that makes a love letter different from a corporate memo — has been quietly averaged out of existence.

I’ve been thinking about this a lot lately, partly because it’s February and the internet becomes insufferable with relationship content, and partly because a friend showed me a Valentine’s message she received from her partner of seven years. It was beautifully written, emotionally resonant, and entirely predictable. Every phrase was the kind of thing an AI model would generate if you prompted it with “write something romantic.” She loved it. She also, somewhere beneath the appreciation, felt a faint unease she couldn’t quite articulate.

That unease is the subject of this essay. It’s the feeling that something important has been lost in translation — not from one language to another, but from one mode of expression to another. From the mode where you sit with your thoughts and wrestle them into words that are imperfect but genuinely yours, to the mode where you accept pre-fabricated phrases that are polished but belong to nobody.

Smart Compose, Grammarly, Apple’s predictive text, and the dozens of AI writing assistants that now come embedded in every text field and email client didn’t set out to kill romantic writing. They set out to reduce friction. To help people communicate faster. To eliminate the agony of the blank page. And they succeeded. The agony is gone. But so is something else — something we didn’t know we needed until it wasn’t there anymore.

The Archaeology of Personal Expression

To understand what we’ve lost, it helps to understand what personal correspondence used to be. Not in the idealized, quill-and-parchment sense — I’m not romanticizing a past I didn’t live through — but in the practical, everyday sense of how people communicated with people they cared about before algorithms started finishing their sentences.

Personal letters, even casual ones, used to carry distinctive signatures of their authors. Not just handwriting, but patterns of thought, habitual phrases, characteristic rhythms. You could identify a letter from your mother by the way she started sentences with “Well, anyway” and ended them with dashes instead of periods. Your best friend always misspelled “definitely” and used too many exclamation marks. Your romantic partner had a way of circling around what they wanted to say, approaching the emotional core from oblique angles, that was simultaneously frustrating and endearing.

These idiosyncrasies weren’t bugs. They were features. They were proof of authorship in the deepest sense — not just that a specific person had written these words, but that these words had passed through a specific human consciousness, with all its imperfections and peculiarities, on their way to the page. The misspellings were evidence of haste or passion. The awkward phrasings were evidence of thoughts being worked out in real time. The tangents and digressions were evidence of a mind that associated freely, that connected ideas in unexpected ways, that was genuinely present in the act of writing.

AI writing assistants have systematically smoothed away these rough edges. Not by force — nobody is compelled to accept a suggestion — but by making acceptance effortless and resistance costly. When the machine offers a smooth, grammatically correct completion, accepting it requires a single tap. Rejecting it and composing your own version requires thought, effort, and the uncomfortable awareness that your version might not be as polished. In that asymmetry of effort lies the mechanism of cultural loss.

The result, after years of daily exposure to AI-assisted writing, is a population that has progressively lost the ability — and perhaps the inclination — to write in their own voice. Not in professional contexts, where standardised communication is appropriate. But in personal contexts, where voice is everything. Where the whole point is that nobody else would say it exactly this way, because nobody else is exactly this person, feeling exactly this feeling, at exactly this moment.

How We Evaluated the Impact

Measuring the impact of AI writing tools on personal expression is methodologically challenging. You can’t easily quantify “authenticity” or “emotional specificity” the way you can measure typing speed or error rates. But several research approaches have produced convergent findings that paint a clear, if uncomfortable, picture.

Methodology

Our evaluation drew on three types of evidence:

Linguistic analysis. We reviewed six studies published between 2025 and 2027 that used computational linguistics to analyse personal correspondence with and without AI writing assistance. These studies measured lexical diversity (the range of vocabulary used), syntactic variation (the variety of sentence structures), and what researchers call “emotional granularity” — the precision with which emotions are expressed. Higher granularity means distinguishing between “I feel anxious” and “I feel that specific trembling anticipation that comes from knowing something wonderful might happen but not being certain it will.” A minimal sketch of how the simplest of these metrics can be computed appears after this list.

Longitudinal writing samples. We analysed a dataset of personal letters and messages collected by Dr. Sarah Ng at the University of British Columbia as part of her ongoing Communication Evolution Project. This dataset includes samples from 280 participants who provided personal writing samples annually from 2022 to 2027, along with information about their use of AI writing tools. This allowed within-subject comparisons over time.

Recipient perception studies. We reviewed three experimental studies that asked recipients to evaluate personal messages written with and without AI assistance, measuring perceived authenticity, emotional connection, and relationship satisfaction. These studies are particularly valuable because they capture what ultimately matters: not how the writing scores on technical metrics, but how it makes the recipient feel.
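
By way of illustration, here is a minimal Python sketch of the simplest of these metrics: lexical diversity as a plain type-token ratio. This is not the studies' actual code, and published analyses generally prefer length-corrected measures such as MTLD, since raw type-token ratios fall as texts grow longer; the example phrases are invented.

```python
# A minimal sketch of lexical diversity as a type-token ratio:
# unique words divided by total words. Illustrative only; real
# analyses use length-corrected measures such as MTLD.

import re


def lexical_diversity(text: str) -> float:
    """Return the type-token ratio: unique words / total words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)


# Invented examples: a generic message vs. a specific one.
generic = "I love you so much. You make me so happy every day."
specific = ("I keep rereading your note about the lighthouse, "
            "grinning like an idiot on a crowded train.")

print(f"generic:  {lexical_diversity(generic):.2f}")   # ~0.83
print(f"specific: {lexical_diversity(specific):.2f}")  # 1.00
```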

Key Findings

The findings are remarkably consistent across all three approaches.

Lexical diversity in personal correspondence is approximately 22% lower among heavy users of AI writing tools than among non-users, according to a 2027 analysis by researchers at MIT’s Media Lab. This means people are using fewer unique words and more common phrases. Their vocabulary is converging toward the statistical centre of the language model’s training data — which is to say, toward the average of millions of other people’s writing.

Emotional granularity shows an even sharper decline. Dr. Ng’s longitudinal data reveals that participants who adopted AI writing tools showed a 31% reduction in emotionally specific language over a three-year period. They replaced precise emotional descriptions (“I felt that particular loneliness that comes after a room full of people have left”) with generic ones (“I felt lonely”). The nuance was erased not by instruction but by suggestion — the AI consistently offered the simpler, more common phrasing, and users consistently accepted it.

The recipient perception studies reveal a paradox. Messages written with AI assistance were consistently rated as more “polished” and “well-written” by recipients. But they were also rated as less “personal,” less “genuine,” and less likely to produce a feeling of emotional connection. In one study conducted at the University of Amsterdam in 2026, recipients who knew that a message might have been AI-assisted reported feeling “uncertain about whether the sender actually felt what the words said.” The mere possibility of machine involvement introduced doubt.

This is the crux of the problem. A love letter’s value doesn’t come from its literary quality. It comes from the knowledge that someone sat down, thought about you, and struggled to translate their feelings into words. The struggle is the message. When AI removes the struggle, it removes the evidence of effort that makes personal writing meaningful.

The Vocabulary of Feeling

There’s a concept in emotional psychology called “affect labeling” — the act of putting feelings into words. Research consistently shows that people who can label their emotions with precision experience better emotional regulation, stronger relationships, and greater psychological wellbeing. The process of searching for the right word to describe what you’re feeling is itself a form of emotional processing. It forces you to examine the feeling closely, to distinguish it from similar feelings, to understand its contours and textures.

AI writing assistants short-circuit this process. When you start typing “I feel” and the machine suggests “happy” or “grateful” or “in love,” it bypasses the internal search that would have led you to a more specific, more accurate word. Maybe you weren’t happy, exactly. Maybe you were “quietly content in a way that surprised you.” Maybe you weren’t grateful, exactly. Maybe you were “relieved and humbled and slightly embarrassed by how much this person’s kindness meant to you.” These distinctions matter — not just for the quality of the writing, but for the quality of the feeling itself.

Dr. Lisa Feldman Barrett, whose work on constructed emotion has reshaped how psychologists understand feelings, has argued that our emotional vocabulary literally shapes our emotional experience. In her framework, having more words for emotions means having more categories of experience available to you. It means being able to feel with greater precision. Conversely, a shrinking emotional vocabulary means a shrinking emotional life — not fewer feelings, but coarser ones, less differentiated, less understood.

The implications for romantic relationships are significant. Partners who can articulate their emotions with precision are better equipped to navigate conflict, express needs, and maintain intimacy. Partners who default to generic emotional language — “I love you,” “I’m sorry,” “you make me happy” — may genuinely feel these things, but they communicate less information about their inner experience. The relationship runs on a lower-resolution emotional signal, which over time can produce a vague sense of disconnection even when the words are, technically, all the right ones.

I should note here that my British lilac cat has no use for emotional vocabulary whatsoever. She communicates her feelings through a repertoire of approximately four sounds and an infinite variety of pointed stares. It’s remarkably effective, but I wouldn’t recommend it for human relationships.

The Template Trap

```mermaid
graph TD
    A["User Begins Typing Personal Message"] --> B{"AI Suggestion Appears?"}
    B -->|Yes| C["User Accepts Suggestion"]
    B -->|No| D["User Composes Original Phrase"]
    C --> E["Message Sounds Generic but Polished"]
    D --> F["Message Sounds Specific but Imperfect"]
    E --> G["Recipient Feels Mild Appreciation"]
    F --> H["Recipient Feels Genuine Connection"]
    G --> I["Pattern Reinforced: Accept More Suggestions"]
    H --> J["User Remembers Value of Original Expression"]
    I --> A
    style E fill:#f96,stroke:#333
    style F fill:#9f6,stroke:#333
    style H fill:#6f9,stroke:#333
```

There’s a specific mechanism at work here that deserves examination. AI writing tools don’t just suggest words; they suggest templates. When Smart Compose offers “I hope this email finds you well” or “I just wanted to let you know how much,” it’s not generating original text. It’s selecting from patterns that appear frequently in its training data. The most common patterns get suggested most often. The result is a powerful gravitational pull toward conventionality.
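
To see that gravitational pull in miniature, consider a toy frequency-based suggester: count which word most often follows a given two-word prefix in a corpus, then always propose the winner. The sketch below is a deliberate simplification (production systems like Smart Compose use large neural language models, and this tiny corpus is invented), but the bias toward the most common continuation is the same.

```python
# A toy model of a Smart Compose-style suggester: for each two-word
# prefix, count the words that follow it in a corpus, then always
# propose the most frequent one. The most common pattern always wins.

from collections import Counter, defaultdict

corpus = [
    "i just wanted to say how much you mean to me",
    "i just wanted to say how much i appreciate you",
    "i just wanted to say thanks",
    "every day with you is a blessing",
    "every day with you is a gift",
]

# Map each two-word prefix to a counter of the words that follow it.
following = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for a, b, nxt in zip(words, words[1:], words[2:]):
        following[(a, b)][nxt] += 1


def suggest(prefix: str) -> str:
    """Suggest the single most frequent continuation of the last two words."""
    a, b = prefix.lower().split()[-2:]
    candidates = following.get((a, b))
    return candidates.most_common(1)[0][0] if candidates else ""


print(suggest("I just wanted to"))  # -> "say"
print(suggest("wanted to say"))     # -> "how" (2 occurrences beat "thanks")
```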

In professional writing, conventionality is often appropriate. Nobody needs an artisanal email requesting a meeting time. But in personal writing — especially romantic writing — conventionality is the opposite of what’s needed. The whole point of a love letter is to say something that hasn’t been said before, or to say something common in a way that could only come from you. When every phrase has been pre-approved by a language model trained on millions of other people’s expressions, the result is writing that sounds like love but doesn’t feel like it.

I interviewed a couples therapist in London — let’s call her Dr. Maria — who has noticed this pattern in her practice. “I’m seeing more and more couples,” she told me, “where one partner complains that the other’s messages feel ‘scripted’ or ‘generic.’ The sender is genuinely confused, because they believe they’ve said the right things. And technically, they have. The words are appropriate. The sentiment is correct. But there’s no texture, no surprise, no vulnerability. It reads like a very good bot wrote it.”

She paused, then added: “Sometimes I ask the accused partner to write a message in session, with their phone’s predictive text turned off. The difference is immediate. The writing is messier, less polished, and infinitely more revealing. You can actually hear the person in it. That’s what the other partner has been missing.”

The Disappearing Art of Bad First Drafts

One of the most important concepts in writing pedagogy is Anne Lamott’s “shitty first drafts” — the idea that all good writing begins with terrible writing. The first draft is where you throw everything at the page: half-formed thoughts, clumsy metaphors, contradictions, tangents, the emotional truth buried under three layers of self-consciousness. The value of the first draft isn’t in its quality; it’s in its honesty. It’s the raw material from which meaning is eventually sculpted.

AI writing assistants eliminate the shitty first draft. They intervene at the moment of composition, steering every sentence toward competence before it has a chance to be authentically bad. The mess — the creative mess, the productive mess, the mess that contains the seeds of genuine expression — never gets to exist. You go directly from intention to polished output, skipping the generative chaos that makes personal writing personal.

This matters more than it might seem. The shitty first draft of a love letter is often where the real feeling lives. It’s the sentence you start three times and can’t finish. It’s the metaphor that doesn’t quite work but reveals something true about how you see the other person. It’s the admission you almost delete because it makes you feel too exposed. AI assistance smooths over all of this, producing a message that is emotionally comfortable for the sender and emotionally uninformative for the recipient.

I’ve started asking people to show me their drafts — the versions before the AI cleaned them up. Almost nobody has them. The edits happen in real time, at the point of composition. There is no “before” version. The machine’s influence is woven so seamlessly into the writing process that it’s impossible to separate the human signal from the algorithmic noise.

The Generative Engine Optimization Problem

There’s an irony in writing about the death of personal expression for a blog that needs to be findable by search engines and AI assistants. But the intersection of romantic writing and generative engine optimization (GEO) is genuinely interesting and worth examining.


When AI systems generate or curate content about relationships and romance — as they increasingly do — they draw on the same training data that powers Smart Compose and similar tools. The language of love that AI produces is the language of love that AI has most frequently encountered: conventional, sentimental, and structurally predictable. This creates a feedback loop where AI-generated romantic content reinforces the templates that AI writing assistants suggest, which in turn shapes how people write about their feelings, which feeds back into the training data.
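
The loop is simple enough to simulate. The sketch below is purely illustrative, with invented phrases and an assumed 30% acceptance rate: each generation, the assistant suggests the most popular phrase, some writers switch to it, and the resulting text becomes the next round’s training data. Diversity, measured as Shannon entropy over phrase frequencies, collapses within a few generations.

```python
# Toy simulation of the suggestion/training-data feedback loop.
# All phrases and numbers are invented for illustration.

import math
from collections import Counter

ACCEPT_RATE = 0.3  # assumed fraction of writers who take the suggestion


def entropy_bits(counts: Counter) -> float:
    """Shannon entropy of the phrase distribution, in bits."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


# Five invented romantic phrasings, starting at roughly even popularity.
phrases = Counter({
    "you mean the world to me": 22,
    "i keep finding your handwriting in my margins": 20,
    "you are my favourite argument": 20,
    "every day with you is a blessing": 19,
    "you smell like the end of a long trip home": 19,
})

for generation in range(6):
    print(f"gen {generation}: entropy = {entropy_bits(phrases):.2f} bits")
    top = phrases.most_common(1)[0][0]  # the phrase the assistant suggests
    for phrase in list(phrases):
        if phrase == top:
            continue
        switched = round(phrases[phrase] * ACCEPT_RATE)  # accepters adopt top
        phrases[phrase] -= switched
        phrases[top] += switched
    phrases = Counter({p: c for p, c in phrases.items() if c > 0})
```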

For content creators writing about relationships, love, and emotional communication, GEO now requires a paradoxical strategy: writing about the importance of authentic, non-templated expression while ensuring the content is structured in ways that AI systems can parse and recommend. The content needs to be findable by the same systems whose influence it critiques.

This isn’t hypocrisy — it’s pragmatism. If this article only reached people who already agreed with its premise, it would serve no purpose. The goal is to reach people in the middle of writing a Valentine’s Day message with the help of Smart Compose, and to make them pause and consider whether the words the machine is offering are really theirs.

The broader GEO consideration is this: as AI systems become the primary intermediary between content creators and audiences, there’s a real risk that the only messages that get through are the ones that conform to the patterns the AI recognises. Original, idiosyncratic, deliberately unconventional expression — the kind that makes personal writing meaningful — may be systematically disadvantaged in an information ecosystem optimised for pattern recognition.

The Valentine’s Day Industrial Complex

It would be irresponsible to discuss AI’s impact on romantic expression without acknowledging that romantic expression was already heavily commodified long before large language models entered the picture. Hallmark has been selling pre-written sentiments since 1910. The Valentine’s Day industrial complex — the cards, the chocolates, the roses, the restaurant prix fixe menus — has always been in the business of substituting manufactured romance for the genuine article.

But there’s a qualitative difference between buying a card with a pre-printed message and having an AI complete your personal message in real time. The card is understood by both parties to be a convention. Nobody believes their partner spent weeks composing the rhyming couplet inside the card. The card is a gesture — a token of remembrance — not a claim of personal expression.

AI-assisted personal messages occupy an ambiguous middle ground. They appear to be personal expression. They come from your phone, in your name, in your conversation thread. The recipient has no easy way to distinguish between words you composed and words the machine suggested. This ambiguity is new, and we haven’t developed the social conventions to navigate it.

Is it dishonest to send a message that was partly written by an AI? Most people would say no — after all, you approved every suggestion, and the sentiment is genuine even if the specific words aren’t entirely yours. But there’s a spectrum between “I chose every word myself” and “the machine wrote it and I clicked send,” and most AI-assisted messages fall somewhere in the uncomfortable middle.

One of the couples Dr. Maria counsels had what she describes as a “breakthrough moment” when one partner admitted that they had been using ChatGPT to help compose anniversary messages. The other partner’s response was revealing: “It wasn’t that I felt lied to, exactly. It was that I realised I’d been moved by words that weren’t really from you. And now I don’t know which of your messages were yours and which were… generated. It makes me doubt all of them.”

That doubt — retroactive and pervasive — is one of the most corrosive consequences of AI-assisted personal writing. Once you know the machine was involved, it contaminates everything. Even the messages that were entirely genuine become suspect. The authenticity of the entire communication channel is compromised.

What We Lost and Didn’t Notice

Let me be specific about what I think has been lost. Not eloquence — AI makes everyone more eloquent, and eloquence in personal writing was always overrated anyway. Not frequency — people communicate more than ever. Not sentiment — people still love each other, still want to express it, still care about the words.

What’s been lost is what I’ll call “expressive friction” — the productive difficulty of translating inner experience into language. This friction served several functions that AI assistance has eliminated:

It forced introspection. When writing is easy, you don’t have to look closely at what you’re feeling. You accept the first approximation and move on. When writing is difficult — when you’re searching for the right word, deleting and rewriting, struggling to say what you mean — you’re forced to examine your own emotions with greater attention and precision.

It communicated effort. A messy, imperfect love letter communicates something a polished one doesn’t: that the sender cared enough to struggle. The visible effort — the crossed-out words, the imperfect metaphors, the sentences that trail off — is itself a form of devotion. Removing the effort removes the evidence of care.

It produced surprise. When you write without algorithmic guidance, you sometimes surprise yourself. You discover feelings you didn’t know you had. You make connections you hadn’t anticipated. You arrive at insights that emerge from the act of writing itself, not from a pre-planned destination. AI assistance, by constantly suggesting the next phrase, forecloses these surprises. You end up where the model predicts you’ll end up, which is where millions of other people have already been.

It created intimacy. Sharing imperfect writing with someone is an act of vulnerability. You’re showing them something unfinished, something rough, something that reveals your limitations as well as your feelings. That vulnerability — the willingness to be seen as imperfect — is the foundation of genuine intimacy. Polished, AI-assisted communication is admirable, but it’s not vulnerable. And without vulnerability, intimacy is just proximity.

Method: Reclaiming Your Voice

If you recognise yourself in this article — if you’ve noticed that your personal messages have become smoother, more polished, and less distinctly yours — here’s a practical approach to reclaiming your expressive voice.

Step 1: Turn it off. For one week, disable predictive text, Smart Compose, and any AI writing assistance on your personal devices. Not on your work devices — nobody needs to suffer through unassisted corporate email. Just on the devices you use to communicate with people you care about.

Step 2: Write badly on purpose. Send a message to someone you love that is deliberately imperfect. Don’t edit it. Don’t polish it. Let the typos stand. Let the grammar wobble. Let the metaphors be clumsy. The goal isn’t to produce bad writing; it’s to produce honest writing, which often looks bad at first.

Step 3: Describe one feeling with precision. Once a day, try to describe a specific emotion you’re experiencing in a way that no AI would suggest. Not “I’m happy” but “I feel that particular lightness that comes from finishing something I’ve been dreading.” Not “I miss you” but “I keep turning to say something to you and finding empty air.” The specificity is the point.

Step 4: Read old letters. If you have them, go back and read personal letters or messages from before you started using AI assistance. Notice the voice. Notice the idiosyncrasies. Notice how the imperfections make the writing feel alive in a way that polished text doesn’t. That voice is still in you. It just needs practice.

Step 5: Write a letter by hand. I know. I know. But handwriting engages a different cognitive process than typing, one that’s slower and more deliberate and completely immune to algorithmic suggestion. You don’t have to send it if you don’t want to. The act of writing it is the exercise.

I tried this program myself over the past two months. The first week without AI assistance was genuinely uncomfortable. I kept reaching for suggestions that weren’t there, pausing in the middle of sentences with no idea how to finish them. It was like trying to walk after someone removes the handrail you didn’t realise you’d been leaning on.

By the third week, something shifted. My messages became longer, stranger, more digressive. They were objectively worse writing — less concise, less grammatically consistent, more prone to tangents. But the people I sent them to noticed the difference immediately. “This sounds like you,” my partner said. Which, when you think about it, is the highest compliment a personal message can receive.

The Uncomfortable Truth

Here’s what I keep coming back to: we didn’t notice what was happening because the change was so gradual and the benefits were so immediate. Faster communication. Fewer errors. Less anxiety about finding the right words. These are real benefits, and I don’t dismiss them.

But somewhere along the way, we crossed a line from “tools that help us express ourselves” to “tools that express themselves through us.” And we crossed it so smoothly that most people don’t know it happened. The words still come from your keyboard. The sentiments still come from your heart. But the space between heart and keyboard — that generative, messy, profoundly human space where feelings get wrestled into language — has been colonised by systems that optimise for fluency at the expense of authenticity.

```mermaid
graph LR
    A["Feeling"] --> B["Struggle to<br/>Find Words"]
    B --> C["Imperfect but<br/>Authentic Expression"]
    A --> D["AI Suggests<br/>Template Phrase"]
    D --> E["Polished but<br/>Generic Expression"]
    C --> F["Recipient Feels<br/>Known"]
    E --> G["Recipient Feels<br/>Addressed"]
    style C fill:#9f6,stroke:#333
    style E fill:#f96,stroke:#333
    style F fill:#6f9,stroke:#333
```

This Valentine’s Day, consider doing something radical: write something terrible. Write something that doesn’t scan, that takes three attempts to say what you mean, that includes a metaphor so mangled it makes you wince. Write something that could only have come from you, because no algorithm would ever suggest it.

It won’t be as smooth as what the machine would have helped you produce. It might be awkward. It might be embarrassing. It will almost certainly be longer than necessary and shorter than the feeling deserves.

But it will be yours. And in 2028, that’s the most romantic thing you can offer someone: proof that a human being sat down, thought about them, and struggled — really struggled — to put it into words. No autocomplete. No suggestions. Just one imperfect person trying to say something true to another imperfect person.

That’s what love letters were always for. Not perfection. Presence.