By now every larger organisation incorporates CTI in some fashion to improve security decision-making and detection. We have all heard that CTI is an activity rather than something you buy, but it is time- and labour-intensive, and most of us don’t have dedicated resources to build threat landscapes and assessments. How do you produce actionable intelligence when time and resources are scarce? In a position paper 1 on CTI we pointed out its pitfalls, and in this article we provide some guidance on how to avoid them.
Creating actionable intelligence
Intelligence is mere information if it isn’t actionable. We need interpretation: it facilitates action and distinguishes intelligence from mere data. Interpretation takes the organisation’s context into account, turning data into actionable and useful input for decision-making. While many industry entities promise to deliver accurate CTI off the shelf, without in-house interpretation these offerings are little more than fancy blacklists. An organisation’s CTI function will usually cater to various internal stakeholders, such as C-level executives, security operations, vulnerability management and IT/security architects. All of these have different contexts and different views on actionability. On the one hand, we almost drown in CTI, given the volume of threat reports and indicators pushed through feeds daily. On the other hand, internal data remains mostly untapped as a potential source of CTI. Regardless of our individual CTI priorities, we suffer from information overload. How to deal with this?
Actioning the Intelligence Cycle
We are all familiar with the Intelligence Cycle as originally put out by the US Department of Defense. A common complaint is that it is too abstract to serve as a basis for a CTI practice. For this reason, we provide practical recommendations per phase of the Intelligence Cycle.
Planning and direction
It’s easy to hoard a firehose of whatever data is available, be it free feeds or the daily stream of reports. This is also a sure-fire way to an expensive SIEM bill for your SOC and cognitive overload for your analysts. Consider only collecting data against pre-set goals based on stakeholder requirements, optionally formally established and signed off as (Priority) Intelligence Requirements. Examples are threats specific to network infrastructure components or unique to your organisation’s industry vertical, which may be collected via feeds or reports. If you decide to procure data to fulfil one or more requirements, be aware that because of sharing agreements between vendors, you could end up paying for the same intelligence twice 2. Inform yourself about the origin of data upfront. From what type of sensors is the data sourced, and where are these located? How much of the data is unique compared to what you’re already collecting? What is its half-life according to the provider?
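To make this concrete, here is a minimal Python sketch of what requirement-driven collection can look like: an incoming report is only kept if it matches a pre-set intelligence requirement. The PIR names and keyword sets are purely illustrative, not a recommended taxonomy.

```python
# Illustrative (Priority) Intelligence Requirements: each maps an identifier
# to keywords that signal relevance. Real PIRs come from your stakeholders.
PIRS = {
    "PIR-1 network infrastructure": {"router", "vpn", "firewall"},
    "PIR-2 industry vertical": {"healthcare", "hospital", "ehr"},
}

def matching_pirs(report_text: str) -> list[str]:
    """Return the PIR identifiers a report is relevant to, if any."""
    words = set(report_text.lower().split())
    return [pir for pir, keywords in PIRS.items() if words & keywords]

def triage(reports: list[str]) -> list[tuple[str, list[str]]]:
    """Keep only reports that satisfy at least one requirement."""
    return [(r, m) for r in reports if (m := matching_pirs(r))]
```

Even a crude filter like this beats hoarding the firehose: everything that enters the pipeline is traceable back to a stakeholder requirement.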
Collection
For collection, many think primarily of feeds, but the most popular OSINT feeds have been shown to report IoCs with more than three weeks’ delay and to be highly biased due to the specific collection methods of the supplier 3. This implies that if you are collecting feeds as part of an early-warning strategy, their value is more limited than you might think. The quality of commercial data has also been questioned, as vendors have very scarce coverage of the threats on which they focus 4. OSINT is still a viable option, but there is more to it than feeds. A Twitter scraper like Twint or a web scraper such as Scrapy allows for more targeted and timely collection on specific ATT&CK techniques or CVEs that are a concern to your organisation.
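As a sketch of what such targeted collection might look like downstream of a scraper, the snippet below filters scraped text for the CVE and ATT&CK technique identifiers on your watchlist. The regexes and watchlist are my own illustration, not part of Twint’s or Scrapy’s API.

```python
import re

# CVE IDs look like CVE-YYYY-NNNN+; ATT&CK technique IDs like T1055 or T1055.012.
CVE_RE = re.compile(r"\bCVE-\d{4}-\d{4,7}\b", re.IGNORECASE)
ATTACK_RE = re.compile(r"\bT\d{4}(?:\.\d{3})?\b")

def extract_mentions(text: str, watchlist: set[str]) -> set[str]:
    """Return watched CVE/ATT&CK identifiers mentioned in a scraped post."""
    found = {m.upper() for m in CVE_RE.findall(text)}
    found |= set(ATTACK_RE.findall(text))
    return found & watchlist
```

Hooked into a scraper’s item pipeline, this lets you alert on chatter about exactly the vulnerabilities and techniques that concern you, rather than ingesting everything.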
Processing and exploitation
Most of the challenges with processing are technical, centring on parsing and manipulating technical data into a supported format, or separating signal from the noise you might get from scrapers used to collect data from social media and other internet sources. When catering to a SOC, CTI should ideally only be fed into a SIEM after analysis. In practice, processing and exploitation is often automated aggregation, parsing and enrichment, which doesn’t receive much reconsideration. But this phase is key to maximising the success of your CTI effort if you think about the half-life of indicators to avoid false positives. Deprecation should be source- and indicator-dependent. Who was the reporting source? And where does the indicator point? If it is cloud infrastructure or a short-lived DGA domain, the half-life is generally shorter. When deciding about automatic deprecation based on half-life, also consider versioning: are you able to roll back and assess impact in case automatic deletion goes awry?
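One way to operationalise source- and indicator-dependent deprecation is exponential decay. The half-life values and source weights below are purely illustrative placeholders; in practice you would derive them from your own telemetry and whatever the provider discloses.

```python
import math

# Illustrative half-lives in days per indicator type. Cloud IPs and DGA
# domains churn quickly; file hashes stay valid far longer.
HALF_LIFE_DAYS = {"cloud_ip": 3.0, "dga_domain": 1.0, "hash": 90.0}

def confidence(indicator_type: str, age_days: float, source_weight: float = 1.0) -> float:
    """Exponentially decay an indicator's confidence by age and type."""
    half_life = HALF_LIFE_DAYS[indicator_type]
    return source_weight * 0.5 ** (age_days / half_life)

def should_deprecate(indicator_type: str, age_days: float, threshold: float = 0.1) -> bool:
    """Flag an indicator for removal once confidence drops below a floor."""
    return confidence(indicator_type, age_days) < threshold
```

Keeping deprecation decisions in code like this also makes the versioning question tractable: the same function, replayed over a dated indicator store, tells you exactly what an automatic deletion run would have removed.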
“Be aware of what we call the false negatives gap. You want to have an approximation of the parts of your threat landscape that aren’t visible to you”
Analysis and production
Analysis is the hardest nut to crack. Many are vocal in their opinions when it comes to analysis, but it is tough to give practical guidance as it is a custom in-house process. Many conference talks pay lip service to Structured Analytic Techniques, a concept from the Intelligence Community.
But don’t be fooled into thinking a standard methodology will save you: recent research shows mixed evidence on the reduction of confirmation bias and even suggests it may increase (!) judgment inconsistency and error 5. Whatever methodology you use, it needs to fit your organisation’s internal processes, asset landscape and response strategy. Beyond that, the specific choice matters less than its consistent application by analysts, as consistent output is central to methodology. This is a question of training, which also includes making analysts familiar with the asset landscape, infrastructure design and response processes.
For strategic analysis, be aware that many of the freely available commercial reports are potentially useful, but are produced primarily for advertising purposes, which has been shown to introduce systemic bias 6. Bias is almost implicit in specific collection techniques, but it isn’t harmful per se. You can make bias work for you by employing it as a deliberate focus. Depending on your threat landscape, you might want to sharpen that focus by prioritising or sunsetting biased sources.
If you want a single parameter to determine whether to pivot further during analysis, look at prevalence. Many reports exist, but it is often unclear whether they are based on single events or on activity that is already widespread. From there, you can look into other parameters such as targeting, victims and impact. Furthermore, be aware of what we call the false negatives gap. You want to have an approximation of the parts of your threat landscape that aren’t visible to you. We tend to focus on false and true positives in the raw data we collect, but also be aware of what you do not see and what this implies for your CTI as well as your alerting strategy. You might want to prioritise threat hunting efforts to focus on these indicator twilight zones.
Dissemination and integration
If you want your SIEM to function as the single pane of glass it is supposed to be, good analysis prior to dissemination is critical to avoid false positives or alert overload. Be especially careful with adding CTI to your Security Orchestration, Automation and Response (SOAR) solution. Make sure it is analysed properly: because a SOAR integrates and automates workflows with other tools, these solutions are force multipliers of false positives. In this case, running simulation tests with actual CTI datasets makes even more sense.
For dissemination of CTI to external partners, research confirms what most of us observe in practice: sharing is scaring. The transaction costs of engaging, establishing the quality and accuracy of shared data, and the risk of violating privacy and anti-trust laws are common barriers 7. This is why the primary indicators of successful participation in sharing are perceived value and reciprocity belief 8. Privacy-enhancing technology such as zero-knowledge proofs can however serve as an enabler to overcome these issues.
Evaluation and feedback
This final step is a cost-benefit analysis that also evaluates how to increase the benefits. The total cost of ownership (TCO) of CTI must be compared against the return on investment (ROI) of efforts such as a Threat Intelligence Platform (TIP) and commercial feeds. Just like security in general, CTI is an insurance policy against bad stuff. Though quantifying its returns can prove challenging, reduction of Mean Time To Detect (MTTD), Mean Time to Respond (MTTR) and overall dwell time can serve as parameters here.
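For example, MTTD and MTTR can be computed straightforwardly from incident timestamps. The three-timestamp record format below is a simplifying assumption; your ticketing system will have its own schema.

```python
from datetime import datetime, timedelta

# Hypothetical incident records: (intrusion start, detection time, containment time).
Incident = tuple[datetime, datetime, datetime]

def mttd(incidents: list[Incident]) -> timedelta:
    """Mean Time To Detect: average gap between intrusion start and detection."""
    total = sum((detect - start for start, detect, _ in incidents), timedelta())
    return total / len(incidents)

def mttr(incidents: list[Incident]) -> timedelta:
    """Mean Time To Respond: average gap between detection and containment."""
    total = sum((contain - detect for _, detect, contain in incidents), timedelta())
    return total / len(incidents)
```

Tracking these two numbers before and after a CTI investment is about as close as you can get to putting a figure on the insurance policy.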
On a daily basis, the Key Performance Indicator (KPI) of strategic CTI efforts is whether they influenced or helped with the decision at hand. For IoCs, this is a question of the true-/false-positive ratio. Expensive commercial feeds have a higher TCO than OSINT feeds, so they should be judged differently. If OSINT is the meat and potatoes, commercial feeds must be the cream of the crop. Their cost can be justified by their focus on actors and TTPs that matter to you specifically. If they don’t deliver that, consider what their added value is. At the end of the day, talking with industry partners and competitors is still free. Potentially high transaction costs mean it is not cheap, but it can yield highly relevant results.
On our lab’s Publications page, full versions of other CTI-related work can be found.
This article was originally published in One Magazine 2021
https://doi.org/10.1080/08850607.2020.1780062 ↩
https://academic.oup.com/cybersecurity/article/4/1/tyy008/5245383 ↩
https://www.cyber-threat-intelligence.com/publications/ACNS2019-feedtimelineness.pdf ↩
https://www.usenix.org/conference/usenixsecurity20/presentation/bouwman ↩
https://onlinelibrary.wiley.com/doi/full/10.1002/acp.3550 ↩
https://www.tandfonline.com/doi/full/10.1080/19331681.2020.1776658 ↩
https://dl.acm.org/doi/abs/10.1145/3339252.3340528 ↩
https://www.sciencedirect.com/science/article/pii/S074756322030011X#sec5 ↩
Together with Christian Doerr, I have written a position paper arguing that CTI is a valuable addition to the cyber security field, but it is still in its infancy. Because of that, it often delivers flawed analysis. If you’re a CTI analyst, you might argue this is all pretty obvious stuff, but it isn’t. From a scientific perspective, most of our practice finds little empirical support and much of what we take as ‘facts’ is really not more than an opinion.
The paper was peer-reviewed by the International Journal of Intelligence and CounterIntelligence and has been published today. It is released as Open Access, so there is no annoying academic paywall; it is available to read here. It includes an overview of what CTI is, its origins, and the challenges and potential it holds. As usual for a position paper, pointing out the field’s challenges serves to identify areas for future research. CTI is rich in tools and technological knowledge, but poor in standardization and methodology. A better product follows a better process.
Below is a quick rundown of the challenges identified in our analysis, which are discussed at more length in the paper:
Needless to say, the above is a clickbaity representation of the more nuanced discussion in our paper, available to everyone here. For the lab, this is where we hope to contribute with our research in the coming years. Because, as indicated in the latter part of the paper, it isn’t all that bad: the CTI field has already contributed much to cyber security and remains one of the most promising concepts for the years to come. Yet we improve through critical introspection and self-examination, not through praise and worship of established phenomena.
But first a little rant on non-cyber, but equally important, operational security. Not beating around the bush: it really amazes me how many people still openly tell others how many Bitcoin they’re holding. They even seem to like it, while most of these same people are pretty reluctant to disclose the amount of dollars or euros in their bank account or the cumulative value of their stock portfolio. This is common sense, which is why I don’t get why so many people still brag at birthday and office parties about the number of full Bitcoin they’re holding. Just don’t do it. If you want a better reason, take a look at Jameson Lopp’s GitHub repo, where he keeps track of physical attacks to steal Bitcoin.
Storing your Bitcoin on a hardware wallet is simply the go-to security recommendation for most people. Sure, storing your Bitcoin on your own full node can be more secure and offer more functionality, but you’ll have to be willing to get a bit geeky at times, as your node needs to be kept up to date. Optionally, look at multisig wallets such as Electrum or Carbon Wallet. While you’re at it, create a new address for each transaction; this is functionality most hardware wallets offer nowadays.
Overall, it makes sense to opt for a hardware wallet whose software is open source, because this increases its auditability. On that basis, any Trezor is preferred over a Ledger, as Ledger is based on proprietary software. This means Ledger’s wallet software is effectively a black box whose security can only be audited by Ledger itself. You think Trezor’s limited altcoin support is a downside? If you want to get scammed through shitcoins, why store them properly anyway? ;) Regardless of the hardware wallet you use, store it in a safe place.
Many people still carry their wallet on their keyring, assuming this is safe because one still needs a BIP39 passphrase in addition to physical access. This is true, but side-channel attacks have been reported for most hardware wallets, such as this one for the Trezor and this one for the Ledger.
It makes good sense to use an ad-blocking browser extension. uBlock Origin would be my go-to recommendation here, as it is compatible with most browsers. While many websites’ revenue models run on advertising income, over the years online advertisements have also been used to serve malware and to needlessly invade user privacy. From the Binance hack last summer we know that users were attacked through Google Ads served on top of the Google result for the actual Binance login page. These ads linked to phishing login pages that captured the user’s credentials, after which the attackers could exfiltrate funds from their Binance custodial wallet.
If you want to take ad-blocking a step further, you can also take this out of the browser and save precious CPU cycles by installing Pi-hole or AdGuard Home on a Raspberry Pi. As these solutions live in your network, they block ads for all your network devices and also allow for additional privacy through DNS over HTTPS.
Apart from a trustworthy ad blocker, be highly selective about the browser extensions you choose to use. It is good security practice to only enable those you really use. Countless stories have passed through my Twitter feed of users losing money to malicious Trezor browser extensions, like this recent one. Even if you’re using an array of browser extensions you’re pretty sure are safe, each of them adds attack surface through common bugs such as XSS vulnerabilities.
Over the past years, many popular services have been breached, making the credentials of hundreds of millions of users available online. On Have I Been Pwned, you can check if your account was compromised in one of the breaches that have been publicly disclosed. In most cases this means that the combination of your username and password for that service is now available to bad actors. If you recycle your passwords with very limited variation across multiple services, the risk that your other accounts get breached as well is now higher. Either way, you need to be cautious, as this is juicy stuff for bad actors, as Brian Krebs has explained perfectly.
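Incidentally, checking whether a password is breached does not require sending it anywhere: Have I Been Pwned’s Pwned Passwords range API uses k-anonymity, where only the first five characters of the password’s SHA-1 hash leave your machine. A rough sketch of the client-side part; the actual HTTP call is omitted, and `range_response` stands in for the API’s reply.

```python
import hashlib

def pwned_range_query(password: str) -> tuple[str, str]:
    """Split a password's uppercase SHA-1 hex digest into the 5-char prefix
    sent to the range API and the suffix that never leaves your machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix: str, range_response: str) -> int:
    """Look up our suffix in the 'SUFFIX:COUNT' lines the API returns."""
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0
```

Because the server only ever sees the five-character prefix, it learns nothing about which of the hundreds of matching hashes you were actually checking.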
So if you want to take an additional step towards increasing your Bitcoin security, create a dedicated email account for your Bitcoin-related affairs. This significantly decreases the risk of collateral financial damage if one of your social media accounts gets hacked. If you want to set yourself up for the future, create an account with ProtonMail or StartMail, which are known to respect your privacy.
This recommendation goes hand in hand with the one above. Never re-use passwords and make them so complex that you can’t remember them. For this, you can use a password manager such as KeePass (local) or BitWarden. Of course, also choose a strong master password.
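For instance, generating such an unmemorable password with a cryptographically secure RNG takes only a few lines of Python. The length and alphabet here are my own choices for the example, not a universal recommendation.

```python
import secrets
import string

# Letters, digits and punctuation; services with character restrictions
# may need a narrower alphabet.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 24) -> str:
    """Draw each character from a CSPRNG; the result is meant to live in
    a password manager, not in your head."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Note the use of `secrets` rather than `random`: the latter is not suitable for security-sensitive randomness.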
It is also recommended to enable second-factor authentication at all services that support it. Usually this means you will need to enter a TOTP password generated through an algorithm supported by apps such as Google Authenticator. In addition, if you already own a hardware wallet that supports U2F, you can use that device for this as well. Please be very cautious about using SMS codes as a second authentication step. Some telecom operators are known to be vulnerable to SIM swapping. If you’re with one of those operators, an impersonator can request a new SIM card for your phone number, after which they might be able to take control of your online accounts, as some websites use your phone number for identification.
Using Linux for your interaction with any Bitcoin-related services will drastically decrease your attack surface. While Windows is a perfectly fine OS if you keep it up to date, an installation with many user-installed applications is less secure than a Linux distro running only applications from the distro’s own repository (e.g. Main in Ubuntu, which is maintained and supported by Canonical). This doesn’t need to be a drastic overhaul: most distros can be run ‘live’ from USB storage, which is ideal to boot up for your Bitcoin business. If you want to take this a step further, you can also boot up Tails to browse via Tor.
Is using a VPN service safe? Well, this depends on your threat model. Out of privacy concerns, many people think they are better off surfing via a VPN service that costs them a few bucks a month. The competition in the VPN market is fierce, so the prices are unrealistically low. I would argue that when it looks too good to be true, it is. Your ISP, especially if you live in Europe, is bound to strict privacy regulations such as the GDPR. Many VPN providers show off servers located in data centres in plenty of countries, but regardless of the country you connect to, you are no longer routing your DNS traffic through your ISP, but through a shady entity usually incorporated in jurisdictions known as tax havens, which are not known for their excellent privacy legislation.
Of course this is different if you’re interacting with your Bitcoin via wifi. At least make sure the service you’re using encrypts traffic with an SSL/TLS certificate (the well-known green padlock symbol in your browser’s address bar). You can enforce this by using the HTTPS Everywhere browser extension.
Each security measure you take affects ease of use. It is up to you which tail risks you judge acceptable. This normally depends on how many Bitcoin or satoshis you have to secure and their significance relative to the rest of your capital. But as a rule of thumb, never believe in something that is too good to be true. Only last week, for example, it emerged that quite a few users had been lured into using a fake QR code generator for Bitcoin. This relatively simple scam proved to be quite lucrative. Treat fancy tools like these as black boxes. They are abstraction layers, and thereby a potential risk: if you don’t know what’s going on internally, don’t use it.
Cryptocasters! For @Misssbitcoin and @hmblank @BNR it isn’t always easy to make a podcast from home, with guests joining remotely. But it’s great when it works out. With @f00th0ld, who researched hacked #exchanges @tudelft. https://t.co/qGBgxANcZm @bitcoin
— Cryptocast (@CryptocastNL) April 2, 2020
Today I was a guest on the Cryptocast, a podcast primarily about Bitcoin by Dutch radio station BNR Newsradio. Episodes are normally recorded in the BNR studios, but this one ran remotely due to the social distancing measures currently in force to tackle COVID-19.
During the show we go over some TA (technical analysis) of the current Bitcoin price and this week’s crypto news, such as Binance’s acquisition of CoinMarketCap, and dig into some of the nitty-gritty of my recent research paper, which is available for download here.
You can listen to the podcast via regular podcatchers such as Apple Podcasts or stream the audio from BNR’s website, which has the best audio quality. It is also on YouTube, which includes the Zoom gallery view and some screenshares, but with worse audio:
One can argue this is all fine. The money in your bank account is not truly your money either, but only a claim against your bank. These centralized exchange platforms have essentially become the banks of Bitcoin. What is wrong with that, you might ask. The problem is that you should then expect these parties to implement bank-level security, but they don’t. And when things get ugly, this might leave you empty-handed. Which has happened many times: remember the Mt. Gox hack?
We looked at a corpus of 36 cyber security breaches of centralized Bitcoin exchange platforms over recent years. We analyzed how these platforms were breached and how this compares to security breaches of ‘traditional’ financial institutions. Yes, that is a bold comparison for a Bitcoin maximalist and purist. But since what these big centralized platforms have become opposes the Bitcoin philosophy so strongly, they should at least be trustworthy custodians. Of course this is exactly why Trace Mayer launched Proof of Keys1. Not your keys, not your Bitcoin2. On the upside, our research shows that things have gotten better over the last couple of years, but there is still room for improvement. The amount of funds exfiltrated using relatively low-level attack techniques is unique to this ecosystem. Of course none of this would be feasible if people moved their newly acquired BTC directly to privately managed storage. If everyone acted accordingly, would centralized exchange platforms still exist?
This was a fun, quick analysis to pull off, which allowed me to tinker with the VERIS framework and the VERIS toolbox a bit. The paper got accepted into IEEE’s International Conference on Blockchain and Cryptocurrency, which will take place next month. If you’re interested in reading the full paper, it is provided here. Strong hands!
https://www.proofofkeys.com
To introduce ATT&CK to the scientific community as a useful standard, its usefulness needs to be proved in a scientifically sound way. I did this over the last couple of months, analyzing a labeled sample of 900+ unique families of Windows malware from 2003 to 2018 (thanks to Daniel Plohmann’s Malpedia). This provides an overview of established techniques within Windows malware and of techniques that have seen increased adoption over recent years. Within the dataset, I observed an increase in various techniques such as fileless execution, discovery of security software and DLL side-loading. A nice observation is that a (formerly) sophisticated technique, command and control (C&C) over IPC named pipes, is being adopted by less sophisticated actor groups. Malware authors are innovating techniques in order to bypass established defenses (doh).
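As a toy illustration of the kind of counting behind such an analysis, one can tally in how many families each ATT&CK technique is observed. The corpus below is fabricated for the example; the real dataset is Malpedia’s labeled sample.

```python
from collections import Counter

# Fabricated stand-in for a labeled corpus: family name -> observed techniques.
corpus = {
    "family_a": {"T1055", "T1518.001", "T1574.002"},  # injection, security software discovery, DLL side-loading
    "family_b": {"T1055", "T1574.002"},
    "family_c": {"T1055"},
}

def technique_prevalence(corpus: dict[str, set[str]]) -> Counter:
    """Count in how many malware families each technique is observed."""
    counts = Counter()
    for techniques in corpus.values():
        counts.update(techniques)
    return counts
```

Run per year of first observation, the same tally yields adoption trends like the rise of fileless execution mentioned above.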
I wrote up the analysis results in a paper, using the uniform language offered by ATT&CK. This all turned out quite nicely, as the paper was accepted to the 15th International Conference on Security and Privacy in Communication Networks, which will be held in Orlando this October. A few days later I will also be presenting the results during ATT&CKcon, MITRE’s ATT&CK-focused conference.
So, where’s the beef?
All of this results in the following graphical representation of most commonly implemented malware techniques (click for high-resolution version). For those interested in reading the full paper, it is available for download here. If you have any questions or remarks, just get in touch on the socials or via the contact form. I am happy to hear from you.