Another quick post in a list of CTF techniques: filtering streams with tshark. tshark is the command-line half of the packet capture tool Wireshark. The advantage here is that it lets you do all manner of filtering on the command line.
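For example, a minimal sketch (the capture file name and stream number are placeholders of my own, not from any particular problem):

```bash
# Show just the HTTP requests in a capture
tshark -r capture.pcap -Y 'http.request' -T fields -e http.request.method -e http.request.uri

# Follow TCP stream 0 and dump its contents as ASCII (-q suppresses the per-packet output)
tshark -r capture.pcap -q -z follow,tcp,ascii,0
```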
As mentioned in my previous post, I recently participated in a security CTF exercise and wanted to write out a few interesting techniques.
This is the second: extracting SQL metadata from a SQLite database.
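As a quick sketch of what that can look like (flag.db is a placeholder name of my own): the sqlite3 CLI will happily dump the schema metadata, all of which lives in the sqlite_master table:

```bash
sqlite3 flag.db '.tables'                      # list the tables
sqlite3 flag.db '.schema'                      # dump the CREATE statements
sqlite3 flag.db 'SELECT * FROM sqlite_master'  # the raw metadata table itself
```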
I recently participated in a security capture the flag (CTF) exercise through work. The goal was, in a wide variety of ways, to find a hidden string of the form flag{...} somewhere in each problem. Some problems required exploiting sample websites, some required parsing various data formats or captures, some required reverse engineering code or binaries, and (new this year) some required messing with LLMs.
As I tend to do for just about everything, I ended up writing up my own experiences. I won’t share that, since it’s fairly tuned to the specific problems and thus 1) not interesting and 2) probably not mine to share, but I did want to share a few interesting techniques I found/used. If it helps anyone either defend against similar attacks in the real world or (more importantly 😄) someone comes across this while trying to solve a CTF of their own, all the better.
Okay, first technique: extracting data from a MongoDB database using search conditions.
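To sketch the idea (the collection and field names here are my own placeholders): even if you can only control the query conditions and observe whether anything matched, $regex conditions let you recover a hidden value one character at a time:

```javascript
// Each query answers one yes/no question: does the flag start with this prefix?
db.users.find({ flag: { $regex: "^flag\\{a" } }).count(); // 0 => wrong character
db.users.find({ flag: { $regex: "^flag\\{h" } }).count(); // 1 => extend the prefix and repeat
```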
Dependabot is… somewhat useful when it comes to letting you know that there are critical issues in your dependencies that can be fixed simply by upgrading the package (they did all the work for you*). The biggest problem is that it can be insanely noisy. In a busy repo with multiple Node.js codebases (especially), you can get dozens or even hundreds of reports a week. And for each one, you would optimally update the code… but sometimes it’s just not practical. So you have to decide which updates you actually apply.
So. How do we do it?
Well, the traditional REST-based GitHub APIs don’t expose the Dependabot data, but the newer GraphQL one does! I’ll admit, I haven’t used GraphQL as much as I probably should; it’s… a bit more complicated than REST. But it does expose what I need.
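A sketch of the kind of query involved; I believe the relevant field is vulnerabilityAlerts on Repository, but treat the exact shape here as an assumption and verify it against the schema (owner/name are placeholders):

```graphql
query {
  repository(owner: "my-org", name: "my-repo") {
    vulnerabilityAlerts(first: 100) {
      pageInfo { hasNextPage endCursor }
      nodes {
        createdAt
        securityVulnerability {
          severity
          package { name ecosystem }
          advisory { summary }
        }
      }
    }
  }
}
```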
A very simple server that can be used to catch all incoming HTTP requests and just echo them back + log their contents. I needed it to test what a webhook actually returned to me, but I’m sure that there are a number of other things it could be dropped in for.
It will take in any GET/POST/PATCH/DELETE HTTP request with any path/params/data (optionally JSON), pack that data into a JSON object, and both log that object to a file (with a UUID1-based name) and return it in the response.
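A minimal sketch of the idea in Node (not the original implementation; among other differences, crypto.randomUUID generates a UUID4 rather than a UUID1):

```javascript
const http = require('http');
const fs = require('fs');
const crypto = require('crypto');

http.createServer((req, res) => {
  let body = '';
  req.on('data', (chunk) => { body += chunk; });
  req.on('end', () => {
    // Parse the body as JSON if possible, otherwise keep the raw string
    let data;
    try { data = JSON.parse(body); } catch { data = body; }

    const record = {
      method: req.method,
      path: req.url,
      headers: req.headers,
      body: data,
    };
    const json = JSON.stringify(record, null, 2);

    // Log to a uniquely named file and echo the same object back
    fs.writeFileSync(`${crypto.randomUUID()}.json`, json);
    res.setHeader('Content-Type', 'application/json');
    res.end(json);
  });
}).listen(8080);
```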
Warning: Off hand, there is already a potential security problem in this regarding DoS. It will happily try to log anything you throw at it, no matter how big, and will store it in memory first. So long-running requests / large requests / many requests will quickly eat up your RAM/disk. So… don’t leave this running unattended? At least not without additional configuration.
That’s it! Hope it’s helpful.
Datadog is pretty awesome. I wish I had it at my previous job, but better late than never. In particular, I’ve used it a lot for digging through recent logs to try to construct various events for various (security related) reasons.
One of the problems I’ve run into, though, is that eventually you’re going to hit the limits of what Datadog can do. In particular, I was trying to reconstruct users’ sessions and then check whether they made one specific sequence of calls or another. So far as I know, that isn’t directly possible, so instead I wanted to download a subset of the Datadog logs and process them locally.
Easy enough, yes? Well: https://stackoverflow.com/questions/67281698/datadog-export-logs-more-than-5-000
Turns out, you just can’t export more than 5000 logs directly. But… they have an API with pagination!
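A sketch of that pagination loop (Node 18+ for the built-in fetch; the request/response shape follows my reading of the v2 logs search API, so double-check it against the current docs):

```javascript
async function fetchAllLogs(query, from, to) {
  const logs = [];
  let cursor;

  do {
    const response = await fetch('https://api.datadoghq.com/api/v2/logs/events/search', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'DD-API-KEY': process.env.DD_API_KEY,
        'DD-APPLICATION-KEY': process.env.DD_APP_KEY,
      },
      body: JSON.stringify({
        filter: { query, from, to },
        page: { limit: 1000, cursor },
      }),
    });
    const result = await response.json();
    logs.push(...(result.data || []));

    // The next-page cursor lives in meta.page.after; it is absent on the last page
    cursor = result.meta?.page?.after;
  } while (cursor);

  return logs;
}
```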
One of the more subtle bugs that a lot of companies miss is Server Side Request Forgery (SSRF). Like its cousin CSRF (cross-site request forgery), SSRF involves carefully crafting a request that runs in a way the original developers didn’t expect, to do things that shouldn’t be done. In the case of CSRF, one site makes a request on behalf of another in a user’s browser (cross-site); in SSRF, a server makes a request on behalf of a client, but you can trick it into making a request that wasn’t intended.
For a perhaps more obvious example, consider a website with a service that will render webpages as preview images; think sharing links on a social network. A user makes a request such as /render?url=https://www.google.com. This goes to the server, which will then fetch https://www.google.com, render the page to a screenshot, and then return that as a thumbnail.

This seems like rather useful functionality, but what if instead the user gives the URL /render?url=https://secret-internal-site.company.com? Normally, secret-internal-site.company.com would be an internal-only domain that cannot be viewed by outside users, but in this case the server is within the corporate network. Off the server goes, helpfully taking and returning a screenshot. Another option, if you’re hosted on AWS, is the AWS metadata endpoint: http://169.254.169.254/latest/meta-data/. All sorts of interesting private things there. Or even more insidious: /render?url=file:///etc/passwd. That shouldn’t work in most cases, since most libraries know better than to render file:// protocol URLs, but… not always!
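To make the shape of the bug concrete, here’s a hedged sketch of what such a vulnerable endpoint might look like (Express-style; renderScreenshot is a hypothetical helper of my own, not a real library call):

```javascript
const express = require('express');
const app = express();

app.get('/render', async (req, res) => {
  // The bug: req.query.url is attacker-controlled and fetched as-is from
  // inside the network, with no validation of scheme, host, or resolved IP
  const screenshot = await renderScreenshot(req.query.url); // hypothetical helper
  res.type('image/png').send(screenshot);
});

app.listen(8080);
```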
cyu’s Rack::Cors middleware is rather handy if you want to control your CORS (Cross-Origin Resource Sharing) settings in a Ruby-on-Rails project. Previously, there was a fairly major issue where :credentials => true was the default (which you generally do not want), but there were also some more complicated tweaks that I wanted to make.
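For reference, a baseline Rack::Cors configuration looks something like this (a sketch following the gem’s README; the origin and path are placeholders):

```ruby
# config/initializers/cors.rb
Rails.application.config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins 'app.example.com'
    resource '/api/*',
      headers: :any,
      methods: [:get, :post, :put, :delete],
      credentials: false # explicitly avoid Access-Control-Allow-Credentials
  end
end
```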
One problem I recently had to deal with was wanting to:

- Allow credentials (Access-Control-Allow-Credentials) to be sent for sibling subdomains

If you have a website that allows users to submit URLs, one of the (many many) things people will try to do to break your site is to submit URLs that use the javascript: protocol (rather than the more expected http: or https:). This is almost never something that you want, since it allows users to submit essentially arbitrary code that other users will run on click in the context of your domain (same origin policy).
So how do you fix it?
A first thought would be to check the protocol:
```
> safe_url = (url) => !url.match(/^javascript:/)
[Function: safe_url]
> safe_url('http://www.example.com')
true
> safe_url('javascript:alert(1)')
false
```
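That check is case-sensitive and anchored at the very first character, which hints at why it’s only a first thought. Continuing the same (hypothetical) REPL session: browsers treat the scheme case-insensitively and strip leading whitespace in href values, so both of these slip through:

```
> safe_url('JavaScript:alert(1)')
true
> safe_url(' javascript:alert(1)')
true
```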
As part of general security good practices, you should always (whenever possible):

- Set the secure and http_only flags on your cookies
- Use HTTP Strict Transport Security (HSTS)
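As a rough sketch of what those look like on the wire (plain Node; the header values are common defaults, not a recommendation tuned to any particular app):

```javascript
const http = require('http');

http.createServer((req, res) => {
  // Cookie flags: Secure (only sent over HTTPS) and HttpOnly (not readable from JavaScript)
  res.setHeader('Set-Cookie', 'session=abc123; Secure; HttpOnly');
  // HSTS: tell browsers to only ever talk to this domain over HTTPS
  res.setHeader('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
  res.end('ok');
}).listen(8080);
```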