
Long polling: What it is and when to use it

Alison Gunnels
  • Tutorial Type: Basics
  • Reading Time: 10 min
  • Building Time: N/A

Imagine you’re trying to have a conversation over chat - what if you constantly had to reload the chat to see if there was a new message? It would be frustrating, right? Fortunately, you don’t have to deal with this; communication apps are designed to automatically fetch data when necessary. One method these apps use is long polling, a fast and efficient connectivity option.

In this article, you’ll learn all about long polling, including why it’s important, as well as the difference between long polling and short polling. You’ll also explore related technologies like server-sent events (SSE) and WebSockets.

What is long polling?

Long polling is a web communication technique in which a client requests information from a server, and the server holds the connection open until it has new data to send. Unlike regular polling, in which the client repeatedly sends requests at fixed intervals, long polling reduces the frequency of requests and server load by keeping the connection open longer. When the server has new information, it sends the data and closes the connection, prompting the client to immediately re-establish the connection and wait for more updates. This method effectively simulates real-time communication without the need for continuous request cycles. It’s commonly used in applications such as in-app chat, where real-time updates are crucial but maintaining a constantly open connection (as with WebSockets) isn’t feasible.

A flow chart demonstrating long polling (adapted from source)

Long polling example

In an in-app chat use case, long polling can be used to deliver new messages promptly without frequent requests to the server. When a user opens the chat, their app sends a long polling request to the server, essentially saying, "Tell me if there are any new messages." The server holds this request until a new message arrives. Once a message is available, the server sends it to the client, and the client immediately sends another request to wait for the next update. This process ensures that users receive new messages almost instantly while minimizing the number of requests sent to the server, providing a smooth and efficient chat experience.

In a nutshell, long polling reduces the number of requests needed to provide near-instant updates, lowering network overhead and minimizing latency when compared to short polling. Long polling provides the responsiveness needed for conversational applications like chat and collaboration tools.

How long polling works

In long polling, you begin with the client request:

How long polling works

Once the server receives the request, it keeps the connection open until new data is available or a timeout occurs. Resources are allocated to maintain the connection, which can impact server performance if not managed properly. However, proper optimization techniques (such as connection pooling) help mitigate this issue.

When there is new data, the server responds to the client with the requested information and closes the connection. If no new data arrives before the timeout, the server sends an empty response instead. Either way, the response signals the client to immediately send a new GET request, repeating the cycle.

This allows for near-real-time updates while still operating over conventional HTTP connections.
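To make the request-hold-respond cycle concrete, here is a minimal server-side sketch in Python (standard library only). It is illustrative rather than production code, omitting the optimization and security measures discussed later: the handler holds each GET open on a condition variable until a message is published or a hold timeout expires, then replies and lets the client reconnect. The `publish` helper, handler names, and 25-second timeout are assumptions for this demo, not from the article.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

_messages = []                 # updates not yet delivered to a client
_cond = threading.Condition()  # wakes handlers that are holding a request

def publish(message):
    """Queue an update and wake any client waiting on a long poll."""
    with _cond:
        _messages.append(message)
        _cond.notify_all()

class LongPollHandler(BaseHTTPRequestHandler):
    HOLD_SECONDS = 25  # how long to hold the request before an empty reply

    def do_GET(self):
        with _cond:
            if not _messages:
                # Hold the connection open until data arrives or we time out.
                _cond.wait(timeout=self.HOLD_SECONDS)
            pending = list(_messages)
            _messages.clear()
        # An empty list here means the hold timed out with no new data;
        # either way, the client should immediately issue a new request.
        body = json.dumps(pending).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the demo
```

To try it, run `ThreadingHTTPServer(("127.0.0.1", 8080), LongPollHandler).serve_forever()` in one thread and call `publish("hi")` from another; a `GET /` issued in between will block until the publish.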

We have seen what long polling is and how it works. Why is it so important in applications today?

Why long polling is important

There are several reasons why long polling is especially useful today:

  1. Middle ground for low-latency comms: Long polling is the middle ground between traditional request-response models and real-time applications. It provides low-latency communication using a simple and familiar protocol: HTTP.

  2. Speed: When compared to short polling, long polling is faster. The client makes fewer requests, and those open requests allow the server to provide near-real-time updates.

  3. Testing: If you’re looking to try new ideas and concepts without the complexity of advanced technologies, long polling is a great option. For instance, long polling can help you determine whether an Internet of Things (IoT) sensor is providing the correct output for a web-based application. You can also test real-time updates of turn-based games or concurrent editing programs.

  4. Small projects: For projects with limited real-time communication needs, long polling can be both effective and simple to implement. A limited amount of real-time communication lowers the potential to overwhelm the server while offering a fast, responsive service.

  5. Ease of building: Long polling applications can be built quickly with short scripts in many familiar languages, including Python, JavaScript, Ruby, and Java.

However, there are trade-offs as well. Open connections are resource-intensive for the server, and if a server isn’t built to accommodate them, performance degrades, which defeats the purpose of running a highly responsive service.

Let’s now discuss the various methods for maintaining client-server communication. Each approach aims to achieve near real-time data exchange, albeit in different ways. We’ll start by explaining the differences between HTTP long polling and short polling.

Long polling vs. short polling

The distinction between long polling and short polling lies in how each side handles the delay between updates.

In short polling, the client sends requests at regular intervals, which can lead to a high number of HTTP requests and increased server load. In long polling, the client sends a request and waits; the server holds the connection open until it has new information, responds, and closes the connection, after which the client reinitiates the request. Long polling thus reduces the number of requests and makes more efficient use of server resources.

The table below summarizes the differences between HTTP long polling and short polling.

A table that shows long polling vs. short polling features:

| Feature | Short polling | Long polling |
| --- | --- | --- |
| Request frequency | High; requests sent at fixed intervals | Low; one request per update cycle |
| Latency | Higher; bounded by the polling interval | Lower; near real time |
| Server load | Many frequent requests | Fewer requests, but held connections |
| Bandwidth usage | Higher, due to frequent checks | Lower, given proper timeouts |
| Implementation | Simple | More complex (timeouts, reconnection handling) |

Keep in mind that implementing long polling can be more complex than implementing short polling due to the persistent nature of the connection. Developers need to consider various factors, such as connection management, fallbacks for connection issues, and handling timeouts. Despite these complexities, the benefits of reduced latency and improved performance make it a worthy endeavor for many applications.

Moreover, since the server holds the connection open until it has new information, long polling tends to be more bandwidth-efficient than short polling, which involves frequent checks. However, optimization techniques like data compression still play a crucial role in keeping bandwidth usage minimal. If you allow too many ongoing connections, or connections with no set timeouts, your long polling mechanism may not reduce bandwidth usage much compared to short polling. You can implement configurations to detect unnecessary open connections, time out old connections, and throttle the number of connections to reach your preferred balance of responsiveness and bandwidth conservation.

How to optimize long polling

If you include long polling in your client/server communication options, you can reduce server-side resource issues with configurations that limit resource consumption. These options minimize unnecessary connections and reduce the load of fetching and sending data:

  1. Connection timeouts: Implement sensible connection timeouts to avoid resource wastage. This helps close idle connections promptly, ensuring that resources are available for active users and processes.

  2. Throttling and rate limiting: Limit the amount of bandwidth that long polling can use to prevent excessive requests from overwhelming the server.

  3. Connection pooling: Maintain a pool of reusable connections. This approach reduces the overhead associated with establishing new connections and ensures quicker response times.

  4. Heartbeat messages: Send periodic “heartbeat” signals to confirm that the client is still connected and responsive.

  5. Chunking data: Before transmission, break down large pieces of data into smaller, manageable chunks. When the server can send data in smaller increments, the client starts processing data without waiting for the entire payload.

  6. Caching: Cache frequent requests and responses to reduce the load on your server and speed up client response times.

  7. Scalability and load balancing: Distribute incoming requests across multiple servers to improve performance and ensure no single server becomes overwhelmed.

  8. Async operations: By using background workers for slow tasks, you can keep the main thread free, ensuring that your application remains responsive even during heavy processing periods.

  9. Batched responses: Wherever possible, collect multiple updates and send them at once. This reduces the frequency of requests and responses, thus enhancing performance and reducing server load.

  10. Selective polling: Consider using selective polling to minimize unnecessary data transfer. Simply put, selective polling allows the server to respond only when specific conditions are met. This prevents the server from sending responses for irrelevant changes, which reduces server load and bandwidth consumption and improves overall performance.

  11. Retry mechanism: Develop a robust retry mechanism to handle failed requests and ensure continuous real-time updates despite network disruptions.

  12. Compression: Compressing data before sending lowers latency and reduces the amount of bandwidth consumed.
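A few of these optimizations - per-request timeouts (item 1) and a retry mechanism with exponential backoff (item 11) - can be combined in a small client-side sketch. This is a minimal illustration, not a standard API; the function names and default values below are assumptions for the example.

```python
import random
import time
import urllib.error
import urllib.request

def backoff_delay(attempt, base=1.0, cap=30.0):
    """Exponential backoff with full jitter: up to base * 2**attempt seconds."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def long_poll(url, request_timeout=30.0, max_retries=5):
    """Yield each long-poll response body, retrying failures with backoff."""
    attempt = 0
    while True:
        try:
            # The timeout bounds how long we wait for the held response
            # (optimization 1: a sensible connection timeout).
            with urllib.request.urlopen(url, timeout=request_timeout) as resp:
                attempt = 0        # a success resets the retry counter
                yield resp.read()  # hand the update to the caller
        except (urllib.error.URLError, TimeoutError):
            # Optimization 11: retry failed requests instead of giving up.
            attempt += 1
            if attempt > max_retries:
                raise
            time.sleep(backoff_delay(attempt))
```

A caller would consume updates with something like `for update in long_poll("https://example.com/updates"): handle(update)`, where the URL and `handle` are placeholders.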

How to implement long polling

To implement long polling, you can either write a script from scratch or implement it with libraries and frameworks in (for example) JavaScript or Java. The decision on how you implement it is ultimately based on your available tools and the environment in which you will host the long polling function.

The following examples are not exhaustive, but they are representative of popular options.

Long polling implementation with HTTP

Implementing long polling with plain HTTP is a from-scratch approach; the example here uses Python, and the same pattern is available in many environments. The script provided here is a basic implementation without the optimization and security features listed in this article. Keep in mind that this is only an example meant to demonstrate the simplicity of long polling and should not be used in production.

A few notes before you review the code:

  • The demonstrated connection is to the mediastack news server, which provides new headlines every ten seconds. If you copy-paste and run the script, changing only the API key, you should receive a list of events.

  • This script does not work with online Python sandboxes, such as Online Python Compiler or Python Tutor, because they do not allow responses from the server.

Before running, make sure you sign up for a free API key from mediastack and insert it before the API call. There are several methods for doing so; an environment call is discussed around 2:14 in this video by NeuralNine.
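As a sketch of what such a script could look like, the following Python client reads the API key from a `MEDIASTACK_API_KEY` environment variable, requests the latest headlines, prints any it has not yet seen, and immediately re-requests, mirroring the long-polling cycle. The endpoint and query parameters follow mediastack's documented REST API, but treat those details (and the variable name, which is an assumption here) as things to verify against mediastack's docs before use.

```python
import json
import os
import urllib.parse
import urllib.request

API_KEY = os.environ.get("MEDIASTACK_API_KEY", "")  # set this before running
BASE_URL = "http://api.mediastack.com/v1/news"

def new_headlines(seen, articles):
    """Return the titles in `articles` that have not been seen yet."""
    return [a["title"] for a in articles if a["title"] not in seen]

def poll_once(timeout=60.0):
    """Issue one request and block until the server responds or times out."""
    query = urllib.parse.urlencode({"access_key": API_KEY, "limit": 10})
    with urllib.request.urlopen(f"{BASE_URL}?{query}", timeout=timeout) as resp:
        return json.load(resp).get("data", [])

if __name__ == "__main__" and API_KEY:
    seen = set()
    while True:  # the polling cycle: handle the response, then re-request
        for title in new_headlines(seen, poll_once()):
            print(title)
            seen.add(title)
```

Note that the driver loop runs only when the API key is set, so the script exits cleanly rather than failing when no key is available.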

Long polling implementation with libraries and frameworks

You can also implement HTTP long polling using premade libraries and frameworks. Client-side and server-side libraries are available.

When you choose to implement long polling with libraries and frameworks, consider options for timeout management, reconnection handling, and error handling. Also, take into consideration which browsers (if any) are most likely used by your target audience. Both of these considerations can help determine which library you choose to use.

JavaScript offers two useful client-side libraries: Pollymer and jQuery.

  1. Pollymer is a general-purpose AJAX library with features designed for long polling applications. It offers important improvements, such as managing request retries and browser compatibility.

  2. jQuery was not specifically created to improve long polling, but jQuery’s AJAX capabilities make long polling easier to implement.

Server-side libraries in Go and Java are also available to assist you in the implementation of long polling. Golang Longpoll is a Go library for creating HTTP long poll servers and clients. It has features like event buffers and automatic removal of expired events, which lower the burden of the open connection. The Spring Framework has libraries and tools for long polling within Java applications, such as DeferredResult.

Let’s now discuss one important consideration of polling: security.

Security considerations for long polling

As with almost any communication protocol, there are security implications you need to take into account. Thankfully, if you follow good security practices, many of these concerns are easy to overcome:

  • Encryption: Use HTTPS to encrypt data in transit between the client and the server. This prevents potential eavesdropping or man-in-the-middle attacks and protects data integrity and confidentiality.

  • Access management: If the data has confidentiality requirements or if you are concerned about distributed denial-of-service (DDoS) attacks, you should implement authentication and authorization mechanisms. This limits the initiation of long polling requests to verified users, and it validates requests on the server side.

  • Rate limiting: This helps you protect your server from DDoS attacks and prevents resource starvation by limiting the number of connections per source per chosen timeframe (e.g. five connections from the same source per second).
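As an illustration of the rate-limiting idea (e.g., five connections per source per second), here is a minimal sliding-window limiter sketch in Python. The class and method names are invented for this example; production systems typically use a gateway or load balancer for this instead.

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `limit` requests per `window` seconds per source."""

    def __init__(self, limit=5, window=1.0):
        self.limit = limit
        self.window = window
        self._hits = defaultdict(deque)  # source -> timestamps of recent hits

    def allow(self, source, now=None):
        """Record a request from `source`; return False if over the limit."""
        now = time.monotonic() if now is None else now
        hits = self._hits[source]
        # Drop timestamps that have aged out of the window.
        while hits and now - hits[0] >= self.window:
            hits.popleft()
        if len(hits) >= self.limit:
            return False  # over the limit: reject (or delay) this request
        hits.append(now)
        return True
```

A long-poll endpoint would call `allow(client_ip)` before holding a connection open, rejecting the request (e.g., with HTTP 429) when it returns False.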

These considerations are not the only good security practices for long polling. Consider the same precautions you would use for any other HTTP-enabled services, such as content limitations and validation of the transmission sources. Sometimes, the APIs themselves will require these or other security features.

Long polling in the real world

Long polling remains a useful technique in the toolkit of real-time communication methods, especially when newer technologies like WebSockets or SSE might not be feasible.

Long polling is the simplest and most widely supported solution for important communication and update operations like business messaging and chat programs. You can use long polling as the primary method of communication, or you could use it as a backup to WebSockets or SSE. Either implementation ensures you have chosen a protocol and functionality that almost every information system supports.

Long polling is particularly beneficial for projects at a smaller scale or during the prototyping phase when immediate optimization isn’t the priority. However, you should always implement proper security measures and optimize your infrastructure to make the most of this (or any) communication method. Once you are certain that long polling is the way forward for your application, spend time getting to know the additional ways to make the system more secure and more functional.

When your business decides that the time is right to build in-app communication such as chat, calls, or omnichannel business messaging, Sendbird is ready to provide customer communications solutions - including a chat API, a business messaging solution, and a fully customizable AI chatbot - that you can build on. You can send your first message today by creating a Sendbird account to get access to valuable (free) resources with the Developer plan. Become a part of the Sendbird developer community to tap into more resources and learn from the expertise of others. You can also browse our demos to see Sendbird Chat in action. If you have any other questions, please contact us. Our experts are always happy to help!