An automated tool that can simultaneously crawl, fill forms, trigger error/debug pages, and “loot” secrets from the client-facing code of sites.
Usage
To use the tool, you can grab any one of the pre-built binaries from the Releases section of the repository. If you want to build from source, you will need Go > 1.16; simply running go build will produce a usable binary.
Additionally, you will need two JSON files (lootdb.json and regexes.json) along with the binary, both of which are available in the repository itself. Once you have all three files in the same directory, you can go ahead and fire up the tool.
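For example, building from source and placing the data files next to the binary could look like the following sketch (the clone URL and exact file layout are assumptions based on the description above):

$ git clone https://github.com/redhuntlabs/HTTPLoot.git   # assumed repository URL
$ cd HTTPLoot
$ go build                                                 # produces the httploot binary
$ ls
httploot    lootdb.json    regexes.json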
Video demo:
Here is the help output of the tool:
$ ./httploot --help
_____
)=(
/ \ H T T P L O O T
( $ ) v0.1
 \___/
[+] HTTPLoot by RedHunt Labs - A Modern Attack Surface (ASM) Management Company
[+] Author: Pinaki Mondal (RHL Research Team)
[+] Continuously Track Your Attack Surface using https://redhuntlabs.com/nvadr.
Usage of ./httploot:
-concurrency int
Maximum number of sites to process concurrently (default 100)
-depth int
Maximum depth limit to traverse while crawling (default 3)
-form-length int
Length of the string to be randomly generated for filling form fields (default 5)
-form-string string
Value with which the tool will auto-fill forms, strings will be randomly generated if no value is supplied
-input-file string
        Path of the input file containing domains to process
-output-file string
CSV output file path to write the results to (default "httploot-results.csv")
-parallelism int
Number of URLs per site to crawl parallely (default 15)
-submit-forms
Whether to auto-submit forms to trigger debug pages
-timeout int
The default timeout for HTTP requests (default 10)
-user-agent string
User agent to use during HTTP requests (default "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:98.0) Gecko/20100101 Firefox/98.0")
-verify-ssl
Verify SSL certificates while making HTTP requests
-wildcard-crawl
Allow crawling of links outside of the domain being scanned
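
For example, a typical run against a list of domains, using the flags documented above, might look like this (the file names are illustrative):

$ ./httploot -input-file domains.txt -output-file results.csv -concurrency 50 -submit-forms

Results are written to the CSV file given by -output-file (httploot-results.csv by default).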

