How to Use Wget With Proxy: A Complete Guide
Downloading files from the command line is a routine task for data enthusiasts, system administrators, and developers, and Wget is a popular tool for fetching content non-interactively.
Published: 29.05.2025
Reading time: 13 min
The following guide walks you through installing Wget on every major operating system, downloading content with basic wget commands, and applying advanced wget proxy settings for various scenarios: authenticating when an intermediary server requires it, configuring Wget for HTTP and HTTPS connections, and more.
What is Wget?

Wget is a free, open-source command-line tool that downloads files from the web. The name “Wget” comes from “World Wide Web” + “get.” It’s part of the GNU Project and runs on Linux, macOS, and Windows. It supports downloads over the common internet protocols HTTP, HTTPS, and FTP.
How to Install Wget
On Linux
Many Linux distributions include Wget by default. You can check if it’s already installed by running wget --version in your terminal. If Wget is not found, install it using your distribution’s package manager:
- Debian/Ubuntu: Update your package list and install Wget with sudo apt update && sudo apt install wget.
- Fedora/CentOS/RHEL: Use your package manager (dnf or yum) to install: sudo dnf install wget.
Once installed, you can run it from the command line. (If it’s still not found, you might need to add it to your PATH, but on most systems the package manager does this automatically.)
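If you want a quick sanity check after installing, these two commands print the installed version and the install location (both should succeed on any mainstream distribution):
wget --version | head -n 1
which wget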
On macOS
macOS doesn’t come with Wget by default, but you can easily add it. The most straightforward method is using Homebrew, a popular package manager for macOS:
- Install Homebrew if you haven’t already (you can do so by running the Xcode Command Line Tools installer with xcode-select --install, then the Homebrew setup script from the Homebrew website).
- After Homebrew is set up, install Wget by running: brew install wget
Once that is completed, run wget --help or wget --version in Terminal to verify that it is installed and working.
On Windows
Windows does not include Wget, but you can download a pre-compiled Windows binary easily:
- Visit the official GNU Wget downloads page (for example, the eternallybored.org link) and download the latest wget.exe for Windows.
- Copy the wget.exe file into a directory that’s in your system’s PATH (for convenience, you can use C:\Windows\System32\). This makes the command available everywhere in Command Prompt.
- Open Command Prompt and run wget --help to confirm it works. You should see help text, indicating it’s installed correctly.
Now you’re ready to use Wget on Windows! (Tip: you may want to open a new Command Prompt or log off and on after adding to PATH, to ensure the system recognizes the new executable.)
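To confirm that Windows can locate the executable, open a new Command Prompt and run the following (this assumes you placed wget.exe in a folder on your PATH, such as C:\Windows\System32\):
where wget
wget --version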
Wget Basics and Common Use Cases
Before configuring intermediary servers, it’s important to understand how Wget is typically used. Below are some of the most common use cases and command examples.
Wget Command Syntax
The basic syntax for using Wget is:
wget [options] [URL]
You invoke the command followed by any desired options and the target URL to download. You can combine multiple options to control how Wget runs. Next, we’ll look at a few practical examples of Wget in action.
Downloading a Single File
Downloading a file with Wget is straightforward. For example, to download a file from a given URL, you would run:
wget https://example.com/sample-file.zip
Wget will connect to the server and save the file sample-file.zip in the current directory. By default, it uses the original filename from the URL. You can also specify a different output filename using the -O option. For instance:
wget -O myfile.zip https://example.com/download/latest.zip
This will save the downloaded content as “myfile.zip” regardless of its original name.
Downloading Multiple Files
You can download multiple files by listing multiple URLs after the command, for example:
wget URL1 URL2 URL3
Wget will download each of those files in turn. If you have dozens of URLs, a better approach is to put them in a text file (one URL per line) and use the -i option to tell Wget to read the list. For example, wget -i urls.txt will fetch every URL listed in urls.txt.
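For instance, a urls.txt list might look like this (the URLs are placeholders):
https://example.com/files/report-1.pdf
https://example.com/files/report-2.pdf
https://example.com/files/report-3.pdf
Each line is fetched in the order it appears.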
Setting a Custom User-Agent
Some web servers or download scripts block the default user agent (which identifies itself as “Wget/version”) or return different content based on the client. To avoid detection or bypass simple filters, you can spoof the user agent string. For example:
wget --user-agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64)" https://example.com/
This makes Wget pretend to be a modern web browser when it requests the page. You can use any string you want as the user agent. Setting a custom user agent is especially useful when downloading content from websites that block automated clients. If you plan to use a custom agent often, you can also set a default user_agent in your configuration file (the .wgetrc file), as sketched below.
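As a sketch, the equivalent line in ~/.wgetrc would look like this (use whatever string suits you):
user_agent = Mozilla/5.0 (Windows NT 10.0; Win64; x64)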
Limiting Download Speed
If you don’t want Wget to use your full bandwidth or you need to be gentle on the server, you can limit the download speed. Use the --limit-rate option followed by the maximum transfer rate. For example:
wget --limit-rate=200k https://example.com/large-file.iso
This will cap the download at 200 KB/s (kilobytes per second). You can use k for kilobytes or m for megabytes (e.g., --limit-rate=2m for ~2 MB/s). Throttling the speed is useful to prevent Wget from saturating your network or to avoid hitting server rate limits, making it less likely that your IP gets flagged for excessive downloading.
Extracting Links from a Webpage
Wget isn’t just for saving files—you can also use it to find what URLs a page links to. For example, you can use Wget’s spider mode to extract links. The following command crawls a page and prints all URLs it finds:
wget --spider --force-html -r -l1 https://example.com 2>&1 | grep -o 'http[^ ]*'
Here, --spider makes Wget traverse the page without downloading files, and the output is filtered with grep to show only the URLs. This way you can get a quick list of links on a webpage. If you wanted to download those resources, you would omit --spider.
Using Wget with Proxies
One of Wget’s strengths is its ability to work with intermediary servers. Beyond environment variables and config files, you can also pass wget proxy settings as command-line options for quick, one-off tasks. Below, we’ll show you how to set an intermediary server, handle authentication, and troubleshoot errors.
How to Set a Proxy for Wget
Wget can read proxy settings from the environment or config files, or you can specify them per invocation. For a basic wget http proxy setup, you can use environment variables such as:
- Environment variables:
export http_proxy="http://proxy.example.com:8080"
export https_proxy="http://proxy.example.com:8080"
These variables tell Wget which server to use for HTTP and HTTPS requests. Once set (for example, you could put these lines in your ~/.bashrc), any wget command will use them automatically. (On Windows, you can set equivalent variables in the System Environment or with set in the Command Prompt; see the example after this list.)
- Wget configuration file: You can also put the intermediary server settings in a config file so they apply every time. For a single user, edit (or create) the file ~/.wgetrc in your home directory. For system-wide settings, edit /etc/wgetrc. Add lines such as:
http_proxy = http://proxy.example.com:8080
https_proxy = http://proxy.example.com:8080
Save the file. Now Wget will use that setting by default for all HTTP and HTTPS downloads (until you change or remove these lines).
- Command-line option:
wget -e use_proxy=yes -e http_proxy=http://proxy.example.com:8080 http://example.com/file.pdf
This forces Wget to use the given server for that one invocation, which is handy for occasional use or one-off scripted tasks.
By using any of these methods—environment variables, config files, or command-line options—you can tailor your settings to fit your specific network environment.
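On Windows, the equivalent of the export lines above (valid for the current Command Prompt session only; the address is a placeholder) would be:
set http_proxy=http://proxy.example.com:8080
set https_proxy=http://proxy.example.com:8080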
Proxy Authentication (Username & Password)
Often, intermediary servers (especially corporate or paid services) require authentication – in other words, you must provide a username and password to use them. Credentials are only needed if the intermediary server specifically demands them; otherwise, you can leave those settings blank. If the server does require authentication and you don’t supply credentials, Wget will return a “407 Proxy Authentication Required” error. There are two ways to supply them, shown in the sketch after this list:
- Embed the credentials in the proxy URL: include username:password@ in the value of http_proxy or https_proxy (in the environment, the .wgetrc file, or a -e option).
- Use the dedicated options: pass --proxy-user and --proxy-password on the command line. Note that Wget only supports Basic HTTP authentication for intermediary servers; if the server uses a scheme Wget can’t handle (such as NTLM or Kerberos), you will keep getting 407 errors and should consider an alternative tool like cURL.
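As a minimal sketch (alice, secret, and the proxy address are placeholders), either of these forms supplies the credentials – first via the dedicated flags, then via the proxy URL:
wget --proxy-user=alice --proxy-password=secret -e use_proxy=yes -e https_proxy=http://proxy.example.com:8080 https://example.com/file.pdf

export https_proxy="http://alice:secret@proxy.example.com:8080"
wget https://example.com/file.pdf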
Best Protocols to Use with Wget (HTTP, HTTPS, SOCKS)
HTTP and HTTPS: Stick to HTTPS or HTTP proxy servers. These cover virtually all web download scenarios. Just set your http_proxy and https_proxy as described above, and Wget will handle your HTTP and secure HTTPS connections through the intermediary server.
SOCKS: Wget does not support the SOCKS4/SOCKS5 protocols directly. If your only intermediary server option is a SOCKS proxy (for example, Tor’s network), Wget cannot use it on its own. You would need to run an intermediate tool or use a different downloader. For instance, you could use cURL instead – it has native SOCKS support – or a utility like proxychains to funnel Wget through a SOCKS socket, as illustrated below.
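For illustration, assuming a local SOCKS5 proxy listening on 127.0.0.1:9050 (Tor’s default) and, for the second command, that proxychains is installed and configured to point at it:
curl --socks5-hostname 127.0.0.1:9050 -o file.zip https://example.com/file.zip
proxychains wget https://example.com/file.zip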
Using Rotating Proxies with Wget
Rotating intermediary servers automatically change the IP address used for each of your requests. This is useful if you are scraping or making many requests, as it helps avoid IP-based blocking.
Free IP Rotation Options
There are some free ways to rotate IP addresses, but they come with drawbacks. You could gather a list of public intermediary servers and have a script cycle through them for each command. However, free intermediary servers are often unreliable, slow due to overload, and potentially unsafe.
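A rough sketch of that approach in a shell script, assuming a proxies.txt with one proxy URL per line and a urls.txt with the download targets (expect frequent failures with free proxies):
#!/usr/bin/env bash
# Rotate through the proxies in proxies.txt, one per download in urls.txt.
mapfile -t proxies < proxies.txt
i=0
while read -r url; do
  proxy="${proxies[i % ${#proxies[@]}]}"
  echo "Fetching $url via $proxy"
  wget -e use_proxy=yes -e http_proxy="$proxy" -e https_proxy="$proxy" --tries=2 --timeout=15 "$url"
  i=$((i + 1))
done < urls.txt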
Using Premium Proxies to Avoid Blocks
Using a rotating proxy service with Wget is no different from using any other intermediary server. The provider will supply you with an address, port, and login credentials, and you plug those into your settings (as described earlier). For example, you might set http_proxy="http://user:pass@us.rotating-proxy.example.com:8000" and then run Wget normally. In this example, each request through us.rotating-proxy.example.com:8000 would automatically use a different IP from the provider’s pool. While these services typically require a subscription, they can save you a lot of hassle by providing stable, ready-to-use rotating IP addresses.
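To see the rotation in action, you can point Wget at a public IP-echo service (api.ipify.org is used here as one example; the proxy address and credentials are placeholders) and run it twice – with a rotating provider, the printed address should change between runs:
export https_proxy="http://user:pass@us.rotating-proxy.example.com:8000"
wget -qO- https://api.ipify.org
wget -qO- https://api.ipify.org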
Troubleshooting Wget Proxy Errors
Fixing 407 Proxy Authentication Required
If you see output like Proxy request sent, awaiting response... 407 Proxy Authentication Required, it means the intermediary server is rejecting the request due to missing or wrong credentials. In short, the server is asking you to authenticate (HTTP 407 status). Here’s how to fix it:
- Provide the login details: Ensure you’ve included your username and password for the intermediary server in your configuration. Use one of the methods described above (for example, http_proxy with credentials in the URL, or the --proxy-user and --proxy-password flags). Double-check that your credentials are correct.
- Use Basic authentication (or an alternative tool): Wget only supports Basic HTTP authentication with intermediary servers. If your intermediary server uses a different scheme (like NTLM/SSPI on Windows), it will not authenticate successfully; in that case, switch to a tool such as cURL that supports those schemes.
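If you are not sure which scheme the intermediary server expects, cURL’s verbose output includes the Proxy-Authenticate header the proxy sends back (a quick diagnostic sketch; the proxy address is a placeholder):
curl -v -x http://proxy.example.com:8080 https://example.com/ 2>&1 | grep -i proxy-authenticate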
Solving 400 Bad Request with Proxy
Encountering a 400 Bad Request error when using an intermediary server is another issue that can crop up. This error means the server could not understand the request, and it usually points to a misconfigured proxy setting. Here are steps to troubleshoot and fix it:
- Check your intermediary server address: Ensure the intermediary server URL is correctly formatted (including the http:// or https:// prefix and the right port number). A misformatted intermediary server string is the most common cause of a 400 error.
- Match the intermediary server to the protocol: If you’re accessing an HTTPS site, set the HTTPS_PROXY variable (or config) – not just HTTP_PROXY. Similarly, if your intermediary server requires a username/password, verify that you’ve included those properly in the URL (e.g. http://user:pass@proxy.example.com:8080).
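A quick way to see exactly which proxy values Wget is picking up from your environment (a stray or malformed entry here is a common cause of 400 errors):
env | grep -i _proxy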
Wget vs. cURL: Key Differences
Both Wget and cURL are powerful command-line tools for web transfers, but they have different strengths. Here’s a quick comparison to help you choose:
- Recursive downloads: Wget excels at recursively downloading websites or multiple files automatically. It can follow links and create local mirrors of sites. cURL has no built-in recursion; you would need to script that yourself (see the example after this list).
- Usage focus: Wget is primarily used for downloading files or entire pages. On the other hand, cURL is more versatile; it can download or upload data and is often used for APIs and web services (e.g., sending POST requests).
- Proxy and authentication: Both tools support HTTP and HTTPS proxies. However, cURL offers broader proxy support – for instance, it can use SOCKS5 servers and handle advanced authentication schemes (such as NTLM) – which Wget cannot.
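To make the recursion difference concrete, here is a sketch of mirroring a small site with Wget, something cURL cannot do without extra scripting (example.com/docs/ stands in for a site you are permitted to mirror):
wget --mirror --convert-links --page-requisites --no-parent https://example.com/docs/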
Conclusion
Using Wget with proxy servers opens a lot of possibilities for safer and more flexible web scraping and downloads. In this guide, we covered how to install and use Wget, from basic commands to advanced proxy configurations like rotating IP addresses. You learned how to configure Wget to work with both simple and authenticated intermediary servers, how to avoid common errors (like 407 and 400), and when you might choose Wget over cURL (and vice versa).