ParamSpider – Mining parameters from dark corners of Web Archives

Have you ever wondered how to collect all the parameters used on a domain and its subdomains in the past, without manually crawling the Wayback Machine? Good news: there is a solution! ParamSpider (https://github.com/devanshbatham/ParamSpider) is a new open-source tool that mines URL parameters from web archives without interacting with the target host. To find parameters, the tool uses various techniques and wordlists.
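The core idea, extracting parameter names from archived URLs (for example, ones returned by the Wayback Machine's CDX index) instead of crawling the live site, can be sketched in a few lines of Python. The sample URLs below are hypothetical and stand in for real archive results:

```python
from urllib.parse import urlsplit, parse_qsl

def extract_params(urls):
    """Collect unique query-parameter names from a list of archived URLs."""
    params = set()
    for url in urls:
        query = urlsplit(url).query
        # keep_blank_values catches parameters archived with empty values
        for name, _ in parse_qsl(query, keep_blank_values=True):
            params.add(name)
    return sorted(params)

# Hypothetical sample of URLs as a web-archive index might return them:
archived = [
    "https://demo.com/search?q=test&page=2",
    "https://demo.com/login?redirect=/home",
    "https://blog.demo.com/post?id=42&q=old",
]

print(extract_params(archived))  # ['id', 'page', 'q', 'redirect']
```

This is a simplified sketch of the technique, not ParamSpider's actual implementation; the real tool additionally deduplicates URLs, filters by extension, and combines results with wordlists.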
With the additional parameters found, be aware that most of them will be false positives, as the website has probably changed. However, any new parameter is valuable for penetration testers and bug bounty hunters, who can test it for various vulnerabilities such as open redirect, cross-site scripting, SQL injection, SSRF and others.
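ParamSpider emits candidate URLs with a placeholder in each parameter value (the "FUZZ" placeholder and the payload strings below are illustrative assumptions, not part of the original article). Turning such candidates into concrete test cases for a vulnerability class is then a simple substitution:

```python
# Hypothetical payload lists per vulnerability class; real testing
# would use much larger, context-aware payload sets.
PAYLOADS = {
    "open_redirect": ["https://evil.example/", "//evil.example"],
    "xss": ["\"><script>alert(1)</script>"],
}

def build_test_urls(template, category, placeholder="FUZZ"):
    """Replace the placeholder in a candidate URL with each payload."""
    return [template.replace(placeholder, p) for p in PAYLOADS[category]]

# A candidate URL in the placeholder format described above:
candidate = "https://demo.com/login?redirect=FUZZ"
for url in build_test_urls(candidate, "open_redirect"):
    print(url)
```

Each generated URL would then be requested and the response inspected (e.g. a 3xx Location header pointing at the attacker's domain indicates an open redirect).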

Installation and example run (requires python version 3.7+):

git clone https://github.com/devanshbatham/ParamSpider
cd ParamSpider
pip3 install -r requirements.txt
python3 paramspider.py --domain demo.com

Figure 1: output from ParamSpider

The author of this tool also recommends filtering the "juicy" parameters out from the rest using GF (https://github.com/tomnomnom/gf). GF is a wrapper around grep that saves you from typing common patterns. GF profiles are located in the "/ParamSpider/tree/master/gf_profiles" path of the repository, such as redirect (for parameters potentially vulnerable to open redirect or SSRF), xss, wordpress and others.
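Since GF is just a wrapper around grep, a profile such as redirect effectively boils down to a saved regular expression over parameter names. A minimal sketch of the idea (the file contents and the pattern below are illustrative, not GF's actual profile):

```shell
# Hypothetical ParamSpider-style output:
cat > urls.txt <<'EOF'
https://demo.com/login?redirect=FUZZ
https://demo.com/search?q=FUZZ
https://demo.com/out?url=FUZZ
EOF

# Roughly what a redirect-style GF profile does under the hood,
# matching parameter names commonly involved in open redirects:
grep -E '(redirect|url|next|dest|goto)=' urls.txt
```

With GF itself, the profiles shipped in the ParamSpider repository are typically copied into the ~/.gf directory (where GF looks for its pattern files), after which the output can be piped through a profile, e.g. cat urls.txt | gf redirect.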