
How to Set a Proxy in Python Requests (2025)

Routing your HTTP requests through a proxy in Python is invaluable whenever you need your traffic to pass through an intermediary server. Setting one up with the Python Requests library is simple: you just tell the library which proxy server to route your requests through. Whether you're web scraping, accessing geo-restricted content, or just experimenting, knowing how to use a proxy with Python Requests is a genuinely useful skill. Proxying your requests helps you stay under the radar, avoid getting blocked, and even rotate your IP addresses for extra privacy. In this guide, we'll cover everything from basic proxy setup to more advanced topics: proxy authentication, sessions, environment variables, rotating proxies for scraping, choosing the right proxies, and troubleshooting common issues.

Published: 25.04.2025

Reading time: 11 min

Quick Start: Setting Up a Proxy in Python Requests

Let's get straight into sending HTTP requests through a proxy in Python. First you'll install the requests library, then configure it to make requests through a proxy server. Along the way, we'll also see how to supply a username and password for proxy authentication in your Python code.

Installing the Requests Library

Install the requests library using pip if you haven’t already:

bash 
pip install requests

After installing, import it into your Python program with import requests.
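
A quick sanity check that the install worked:

python
import requests

# Print the installed version to confirm the library imports cleanly
print(requests.__version__)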

Basic Proxy Configuration in Python Requests

The Python requests library routes HTTP requests through a proxy when you pass it a proxies dictionary. Each key in the dictionary is a protocol scheme ('http' or 'https') mapped to the proxy's URL. Assuming your proxy server is at 123.45.67.89:8080, the setup looks like this:

python
import requests

# Define proxy server address for HTTP and HTTPS
proxies = {
    'http': 'http://<YOUR_PROXY_HERE>:8080',
    'https': 'http://<YOUR_PROXY_HERE>:8080'
}

response = requests.get('http://httpbin.org/ip', proxies=proxies)
print(response.text)

Here we pass the proxies dictionary to requests.get(). The proxy server intercepts your request and forwards it to the target site. Since httpbin.org/ip echoes back the IP address it sees, the response should show the proxy's IP rather than your own, confirming the request went through the proxy. Note: always set both the 'http' and 'https' keys in the proxies dictionary. If you leave one out and then request a URL of that scheme, the request will bypass the proxy.

Handling Proxy Authentication (Username & Password)

If your proxy server requires a login, include the credentials directly in the proxy URL so the request authenticates properly. The syntax is:

http://username:password@proxy_host:proxy_port

For example, with username myuser and password mypassword:

python
import requests

proxies = {
    'http': 'http://myuser:mypassword@<YOUR_PROXY_HERE>:8080',
    'https': 'http://myuser:mypassword@<YOUR_PROXY_HERE>:8080'
}

response = requests.get('http://httpbin.org/ip', proxies=proxies)
print(response.status_code)

We've added myuser:mypassword@ in front of the proxy host. If the credentials are correct, the proxy accepts the connection and forwards the request. If authentication fails, you'll get a 407 "Proxy Authentication Required" response, so double-check the username and password.
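
One gotcha worth knowing: if your password contains characters that are special in URLs (such as @ or :), embedding it raw in the proxy URL will break parsing. A minimal sketch of one fix, percent-encoding the credentials with urllib.parse.quote before building the URL; the username and password shown are placeholders:

python
import requests
from urllib.parse import quote

# Percent-encode credentials so special characters don't break the URL
username = quote('myuser', safe='')
password = quote('p@ss:word', safe='')  # '@' and ':' get encoded

proxy_url = f'http://{username}:{password}@<YOUR_PROXY_HERE>:8080'
proxies = {'http': proxy_url, 'https': proxy_url}

response = requests.get('http://httpbin.org/ip', proxies=proxies)
print(response.status_code)  # 407 here would mean the credentials were rejected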

Using Python Requests with Different Proxy Setups

With the basics out of the way, let's look at some special cases: using a persistent session with a proxy, specifying proxies through environment variables, and working with SOCKS proxies.

Setting Up a Proxy Session for Persistent Connections

If you are going to make many requests through the same proxy, use a requests.Session() so you don't have to specify the proxy configuration each time. A session also reuses the underlying TCP connection, which helps performance.

python
import requests

session = requests.Session()
session.proxies = {
    'http': 'http://<YOUR_PROXY_HERE>:8080',
    'https': 'http://<YOUR_PROXY_HERE>:8080'
}

response = session.get('http://httpbin.org/ip')
print(response.text)

In this example, we configure the proxies once on the session. Every request made with session.get() or session.post() will then use the configured proxies by default. Sessions are handy in web scraping when you want all requests to follow the same routing setup without redefining it each time.
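
Session-level proxies act as defaults; as far as we know, a proxies argument passed to an individual call takes precedence for that request. A short sketch (the second proxy address is a placeholder):

python
import requests

session = requests.Session()
session.proxies = {
    'http': 'http://<YOUR_PROXY_HERE>:8080',
    'https': 'http://<YOUR_PROXY_HERE>:8080'
}

# Uses the session-level proxy
print(session.get('http://httpbin.org/ip').text)

# Overrides the session proxy for this single request only
override = {'http': 'http://<OTHER_PROXY>:8080', 'https': 'http://<OTHER_PROXY>:8080'}
print(session.get('http://httpbin.org/ip', proxies=override).text)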

Using Environment Variables for Proxy Configuration

Python Requests also picks up proxy settings from environment variables, so you don't have to hardcode them. Set HTTP_PROXY and HTTPS_PROXY in your environment (export on Linux/macOS, set on Windows):

bash
export HTTP_PROXY="http://<YOUR_PROXY_HERE>:8080"
export HTTPS_PROXY="http://<YOUR_PROXY_HERE>:8080"


If you have set these in your terminal, any Python program that uses the requests library will be proxied by default. For instance, running:

python
import requests
response = requests.get('http://httpbin.org/ip')
print(response.text)

will use the proxy from the environment variables, even though we never passed a proxies dictionary in code. This makes it easy to enable or disable proxying for a script without changing the script itself.
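
You can also set these variables from inside Python via os.environ, and a Session's trust_env flag lets you ignore environment proxy settings entirely. A minimal sketch:

python
import os
import requests

# Set the proxy for this process only
os.environ['HTTP_PROXY'] = 'http://<YOUR_PROXY_HERE>:8080'
os.environ['HTTPS_PROXY'] = 'http://<YOUR_PROXY_HERE>:8080'

print(requests.get('http://httpbin.org/ip').text)  # goes through the proxy

# Bypass environment proxies for a specific session
session = requests.Session()
session.trust_env = False  # ignore HTTP_PROXY/HTTPS_PROXY
print(session.get('http://httpbin.org/ip').text)  # direct connection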

Handling HTTP, HTTPS, and SOCKS Proxies

Out of the box, the requests library supports HTTP and HTTPS proxies. For a SOCKS proxy (e.g., socks5:// for Tor or similar), you need to install SOCKS support first:

bash
pip install requests[socks]

Then you can use a SOCKS proxy by specifying the scheme in the URL:

python
import requests

proxies = {
    'http': 'socks5://<YOUR_PROXY_HERE>:1080',
    'https': 'socks5://<YOUR_PROXY_HERE>:1080'
}
response = requests.get('http://httpbin.org/ip', proxies=proxies)
print(response.text)

This routes the request through a SOCKS5 proxy. If your SOCKS proxy requires a username and password, add them to the URL just as with HTTP proxies (socks5://user:pass@host:port). In short, HTTP and HTTPS proxies work in requests out of the box, and SOCKS proxies work once the extra dependency is installed. One more SOCKS detail is worth a quick sketch below; after that, we'll take a deeper dive into rotating proxies for web scraping.
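
With the socks5:// scheme, DNS lookups happen on your own machine before the request reaches the proxy. The socks5h:// scheme (supported by the PySocks stack that requests[socks] installs) asks the proxy to resolve hostnames instead, which keeps your DNS queries from leaking:

python
import requests

# socks5h:// delegates DNS resolution to the proxy itself
proxies = {
    'http': 'socks5h://<YOUR_PROXY_HERE>:1080',
    'https': 'socks5h://<YOUR_PROXY_HERE>:1080'
}
response = requests.get('http://httpbin.org/ip', proxies=proxies)
print(response.text)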

Rotating Proxies in Python Requests for Web Scraping

When web scraping in Python, sending too many requests from the same IP address is a quick way to get blocked. The solution is rotating proxies: switching the proxy, and therefore the visible IP, your script uses from one request to the next. With a list of proxy servers in your scraper, each request can come from a different address, which sharply reduces the chance of being blocked during large-scale scraping. In essence, rotating IP addresses is the secret to successful scraping at scale.

Why You Need Rotating Proxies for Web Scraping

Sites are far more likely to flag or throttle traffic that arrives from a single IP at high intensity. A pool of proxies spreads your requests across many IP addresses, so they look like they come from many different visitors rather than one. Rotating the proxy (and thus the IP) on each request therefore hugely reduces the chance of getting blocked while scraping at volume.

Implementing a Simple Proxy Rotation Script

You can rotate proxies by maintaining a list of proxy addresses (from your provider, or even free proxy lists) and picking a different one for each request, round-robin or at random. For example:

python
import requests, random

proxy_list = [
    "http://proxy1.example.com:8080",
    "http://proxy2.example.com:8080",
    "http://proxy3.example.com:8080"
]

for i in range(5):
    # Pick a random proxy for this request
    proxy = random.choice(proxy_list)
    proxies = {'http': proxy, 'https': proxy}
    try:
        resp = requests.get("http://httpbin.org/ip", proxies=proxies, timeout=5)
        print(f"Request {i+1} via {proxy} -> {resp.text}")
    except (requests.exceptions.ProxyError, requests.exceptions.Timeout):
        print(f"Proxy {proxy} failed, trying another.")
On each pass of the loop, the script picks a random proxy from proxy_list and uses it for the request; the response shows that proxy's IP. We've added a timeout and error handling so that if a proxy fails, the loop skips it and moves on. In production, you would also remove non-working proxies from the list so they aren't reused, as sketched below.
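
A minimal sketch of that pruning idea, dropping a proxy from the pool as soon as it fails (the hosts are placeholders):

python
import requests, random

proxy_list = [
    "http://proxy1.example.com:8080",
    "http://proxy2.example.com:8080",
    "http://proxy3.example.com:8080"
]

def fetch(url, retries=3):
    """Try up to `retries` proxies, removing any that fail."""
    for _ in range(retries):
        if not proxy_list:
            raise RuntimeError("No working proxies left")
        proxy = random.choice(proxy_list)
        try:
            return requests.get(url, proxies={'http': proxy, 'https': proxy}, timeout=5)
        except (requests.exceptions.ProxyError, requests.exceptions.Timeout):
            proxy_list.remove(proxy)  # don't reuse a dead proxy
    raise RuntimeError("All attempts failed")

print(fetch("http://httpbin.org/ip").text)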

Using Proxy Pools to Avoid Blocks

Instead of managing proxies yourself, consider a rotating proxy service with a large IP pool. Such services maintain many proxies and rotate them for you, so each request goes out through a fresh address from the pool. The larger the pool, the fewer requests each IP handles, and the less likely any single address is to be detected and blocked.
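
With such a service, you typically point all requests at a single gateway endpoint and the provider rotates the exit IP behind it. A hedged sketch; the gateway host, port, and credentials below are placeholders, and the exact URL format varies by provider:

python
import requests

# One rotating endpoint; the provider assigns a new exit IP per request
gateway = 'http://<USER>:<PASS>@<GATEWAY_HOST>:<PORT>'
proxies = {'http': gateway, 'https': gateway}

for i in range(3):
    resp = requests.get('http://httpbin.org/ip', proxies=proxies, timeout=10)
    print(f"Request {i+1}: {resp.text.strip()}")  # the IP should differ between requests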

Choosing the Best Proxies for Python Requests

When choosing proxies to use with requests in Python, consider the following for effective routing and anonymity:

Paid vs. Free: Free proxies are easy to find but tend to be slow, unstable, or already blocked by a large share of sites. Paid providers offer faster, more stable proxies. For a large scraping project or any bulk use, good-quality paid proxies are worth the cost for reliable delivery at scale.

Datacenter vs. Residential: Datacenter proxies come from cloud or hosting providers. They are cheap and fast, but their address ranges are well known, so sites can detect and block them quickly. Residential proxies use real ISP connections, are much harder for sites to identify, and are therefore better suited to frequent or high-volume scraping. The drawback is that residential proxies cost more.

Ease of use & Authentication: Check which authentication methods the provider offers (username/password basic auth or IP whitelisting) and how easily the proxy plugs into each outbound request. The simpler the setup (e.g., all traffic through a single proxy endpoint), the smoother your development will be.

Troubleshooting Proxy Issues in Python Requests

Even with a correct Python setup, proxies can still cause trouble. Some common issues and their fixes are set out below:

• Connection errors/timeouts: If requests can't connect through the proxy, the proxy is either down or misconfigured. Verify the address, port, and credentials, set a reasonable timeout, and try a different server if the problem persists.

• SSL certificate errors: Tunneling HTTPS through a proxy can trigger SSL validation errors. As a workaround while debugging, pass verify=False in your call to let the request proceed without certificate checks. Use this only for debugging, because it disables certificate validation entirely. A combined error-handling sketch follows this list.
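
Here is a sketch that handles both failure modes separately; again, the verify=False retry is a debugging aid, not something to ship:

python
import requests

proxies = {
    'http': 'http://<YOUR_PROXY_HERE>:8080',
    'https': 'http://<YOUR_PROXY_HERE>:8080'
}

try:
    resp = requests.get('https://httpbin.org/ip', proxies=proxies, timeout=10)
    print(resp.text)
except requests.exceptions.ProxyError:
    print("Could not connect through the proxy: check the address, port, and credentials.")
except requests.exceptions.SSLError:
    print("SSL validation failed through the proxy.")
    # Debugging only: verify=False disables certificate checks entirely
    resp = requests.get('https://httpbin.org/ip', proxies=proxies, timeout=10, verify=False)
    print(resp.text)
except requests.exceptions.Timeout:
    print("The proxy timed out: it may be down or overloaded.")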

Checking for these problems and handling them will head off most proxy trouble in your Python requests, letting you scrape or call APIs smoothly.

Conclusion

Proxies with Python's requests library let you control the IP address your HTTP requests appear to come from. We've covered basic proxy configuration in requests, authentication credentials, sessions, environment variables, and IP rotation to avoid blocks and keep requests flowing. Together with advice on choosing good proxies and debugging common issues, these techniques cover everything you need to use a proxy with Python requests for web scraping, testing, or anything else. Always use your proxies ethically and respect the terms of the websites you target.
