
Solving Markdown Newline Issues in LLM Stream Responses

Introduction

When building applications on top of Large Language Models (LLMs), streaming responses are a key technique for improving user experience. Streaming displays LLM-generated content in real time with a typewriter effect, instead of making the user wait for the entire response to complete before anything appears.

However, during the implementation of this feature, I encountered a seemingly simple yet confusing issue: Markdown-formatted content returned by LLMs lost its newline characters when displayed in the frontend, causing Markdown formatting errors and affecting both readability and display quality.

This article will analyze the technical reasons behind this issue in detail and share an elegant solution to help developers correctly handle and preserve Markdown formatting in streaming responses.

Problem Analysis: Why Are Newlines Lost?

Symptoms

In my application, LLM responses contained Markdown-formatted content such as code blocks, lists, and paragraphs. In the backend service, this content had complete formatting, including the necessary newline characters (\n). However, once the content was transmitted to the frontend via streaming and rendered, many places that should have contained line breaks were replaced with spaces, resulting in garbled Markdown formatting.

For example, a properly formatted code block like this:

```python
def hello_world():
    print("Hello, World!")
```

Might be rendered in the frontend as:

````markdown
```python def hello_world():    print("Hello, World!") ```
````

This obviously loses the structure of the code block, causing Markdown parsing errors.

Technical Cause Analysis

After systematic investigation, I found that the problem stemmed from a conflict between the newlines inside the LLM's output and the \n\n sequence that the EventSource (Server-Sent Events) protocol uses to delimit messages.

EventSource Basics

EventSource is the Web API through which clients receive Server-Sent Events (SSE), updates pushed from a server. It establishes a one-way channel over which the server can continuously send messages to the client.

The basic format of EventSource communication is:

```
data: message content\n\n
```

The blank line produced by \n\n (two consecutive newline characters) marks the end of an event. This is a requirement of the EventSource protocol: the server must terminate every message this way.
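As a minimal illustration, framing a single-line payload for the wire looks like this (a simplified sketch; production servers usually rely on an SSE helper from their web framework):

```python
def sse_frame(payload: str) -> str:
    # Prefix the payload with the "data:" field name and terminate
    # the event with a blank line, as the SSE protocol requires.
    return f"data: {payload}\n\n"

frame = sse_frame("hello")
print(repr(frame))  # the trailing \n\n marks the end of the event
```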

Root Cause

When Markdown content generated by LLMs contains two consecutive newline characters (\n\n), it conflicts with the EventSource message delimiter. During EventSource parsing, these consecutive newline characters are incorrectly interpreted as message boundaries rather than part of the content.

Specifically, when the EventSource client receives data in the following format:

```
data: This is the first paragraph\n\nThis is the second paragraph\n\n
```

It treats the first blank line as the end of an event and dispatches a message containing only This is the first paragraph. The remainder, This is the second paragraph, then arrives on a line without a data: prefix, so a spec-compliant client silently discards it rather than treating it as content. Either way, the single message with its paragraph break never reaches the application intact.

This causes the crucial newline characters in the Markdown content to be misinterpreted, breaking the format structure, especially for Markdown elements that depend on newlines, such as code blocks, lists, and paragraphs.
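The splitting behavior can be simulated in a few lines of Python (a simplified model of the client-side parser, not the actual browser implementation):

```python
markdown = "First paragraph.\n\nSecond paragraph."

# Naive framing: embed the content directly into a single data field
raw = f"data: {markdown}\n\n"

# An event-stream parser treats every blank line as an event boundary
events = [chunk for chunk in raw.split("\n\n") if chunk]
print(events)  # ['data: First paragraph.', 'Second paragraph.']

# Only the first chunk carries a valid "data:" prefix; the second is
# not a recognized field line, so the paragraph break is lost.
```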

Solution: Serializing and Deserializing Newlines

To address this issue, I designed a serialization and deserialization scheme based on placeholder substitution.

Technical Approach Overview

  1. In the backend service, replace all \n newline characters in LLM responses with a custom placeholder (such as <|newline|>)
  2. Send the replaced content to the frontend via EventSource
  3. After the frontend receives the content, replace the placeholders back to \n newline characters before rendering

This method is similar to a serialization and deserialization process, ensuring that newline information is not lost or misinterpreted during transmission.

Backend Implementation

```python
# Process one LLM response chunk before sending it over SSE
def process_llm_response(llm_response):
    # Replace newlines with a placeholder so they cannot collide
    # with the \n\n sequence that delimits SSE messages
    processed_response = llm_response.replace("\n", "<|newline|>")

    # Wrap the processed text in the SSE wire format
    return f"data: {processed_response}\n\n"
```

Frontend Implementation

```javascript
// Create the EventSource connection
const eventSource = new EventSource('/api/llm-stream');

// Receive and process messages
eventSource.onmessage = (event) => {
    // Replace the placeholders back to newline characters
    const content = event.data.replace(/<\|newline\|>/g, '\n');

    // Render the content with a Markdown rendering library
    renderMarkdown(content);
};
```
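Putting the two halves together, the round trip can be sanity-checked in a few lines (both sides are written in Python here purely for illustration; the decode step mirrors the JavaScript replace on the frontend):

```python
PLACEHOLDER = "<|newline|>"

def encode(text: str) -> str:
    # Backend side: hide newlines from the SSE framing layer
    return text.replace("\n", PLACEHOLDER)

def decode(text: str) -> str:
    # Frontend side: restore newlines before Markdown rendering
    return text.replace(PLACEHOLDER, "\n")

markdown = '```python\ndef hello_world():\n    print("Hello, World!")\n```'

assert PLACEHOLDER not in markdown           # payload must not already contain the token
assert "\n" not in encode(markdown)          # safe to embed in a single data: field
assert decode(encode(markdown)) == markdown  # lossless round trip
```

One practical caveat: the placeholder must be a string the LLM will effectively never emit on its own. <|newline|> follows the special-token style many models reserve, but any sufficiently unusual marker works.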

Solution Advantages

  1. Complete Format Preservation: Ensures all Markdown formatting elements (including code blocks, lists, etc.) are correctly rendered in streaming responses
  2. Simple Implementation: Involves only string replacement operations, simple to implement without modifying existing architecture
  3. Universal Applicability: Suitable for any scenario using EventSource to transmit Markdown content
  4. Minimal Performance Impact: String replacement operations have very little overhead and won't significantly affect performance

Extended Considerations: Handling Other Special Characters

When processing LLM responses, besides newline characters, there may be other special characters to consider. For example:

  1. Special Character Escaping: Certain special characters may require different escape handling in different environments
  2. Internationalization Characters: Special character handling in different language environments
  3. Control Characters: Some invisible control characters may cause unexpected behavior

For these situations, the placeholder scheme can be extended to create a more complete serialization/deserialization mechanism, ensuring all special characters are correctly handled during transmission.
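One possible extension is a small table-driven serializer; the extra placeholder tokens below are hypothetical names chosen for this sketch, not part of the scheme described above:

```python
# Hypothetical token names; only <|newline|> appears in the original scheme
PLACEHOLDERS = {
    "\n": "<|newline|>",
    "\r": "<|cr|>",
    "\t": "<|tab|>",
}

def serialize(text: str) -> str:
    for char, token in PLACEHOLDERS.items():
        text = text.replace(char, token)
    return text

def deserialize(text: str) -> str:
    # Apply the substitutions in reverse order so the mapping stays symmetric
    for char, token in reversed(list(PLACEHOLDERS.items())):
        text = text.replace(token, char)
    return text

sample = "col1\tcol2\r\nvalue"
assert deserialize(serialize(sample)) == sample
```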

Conclusion

When developing LLM streaming response functionality, Markdown newline character loss is an easily overlooked issue that can seriously affect user experience. By analyzing how EventSource works, we identified the root cause and proposed a solution based on placeholder substitution.

This approach cleverly resolves the conflict between newline characters and EventSource message delimiters, ensuring the integrity of Markdown formatting during streaming transmission. This serialization and deserialization approach can also be applied to solve other similar data transmission format preservation issues.

For teams developing LLM applications, understanding and solving seemingly simple but actually complex formatting issues like this can significantly improve the professionalism and user satisfaction of the final product.


License

This article is licensed under CC BY-NC-SA 4.0. You are free to:

  • Share — copy and redistribute the material in any medium or format
  • Adapt — remix, transform, and build upon the material

Under the following terms:

  • Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
  • NonCommercial — You may not use the material for commercial purposes.
  • ShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
