Breaching a CA – Blind Cross-site Scripting (BXSS) in the GeoTrust SSL Operations Panel Using XSS Hunter via /r/netsec


http://ift.tt/2ccQ5mr

Submitted August 31, 2016 at 08:36PM by chloeeeeeeeee
via reddit http://ift.tt/2bDn3xL


Stealing all browser data (passwords, history, etc.) from Yandex Browser by exploiting a CSRF in Yandex’s sync system via /r/netsec


http://ift.tt/2cf1bdw

Submitted August 31, 2016 at 06:29PM by appsecit
via reddit http://ift.tt/2bCVFzU

Enabling OSINT in Activity Based Intelligence (ABI)

Activity Based Intelligence, or ABI, is an intelligence methodology developed out of the wars in Iraq and Afghanistan, used to discover and disambiguate entities (e.g., people of interest) in an increasingly data-rich environment (most of it unclassified and open source). It is geospatial in nature because it seeks to link entities and events through their locations rather than by text.

ABI has four main ideas — or pillars — which form the basis of how to understand and use data to discover unknowns.

In their ground-breaking book, Activity Based Intelligence: Principles and Applications, Vencore Director of Analytics Patrick Biltgen and my good friend and former colleague Stephen Ryan summarize the four pillars as follows:

  • Georeference to Discover: focusing on spatially and temporally correlating multi-INT data to discover key entities and events;
  • Data Neutrality: the premise that all data may be relevant, regardless of the source from which it was obtained;
  • Sequence Neutrality: understanding that the data we have already collected may hold the answers to many questions we do not yet know to ask; and
  • Integration Before Exploitation: correlating data as early as possible, rather than relying on vetted, finished products (from single-INT data), because seemingly insignificant events in a single INT may be important when integrated across multiple INTs.

In his keynote speech at GEOINT 2016, the director of NGA, Robert Cardillo, stated that his challenge to NGA is to succeed in the open. Mr. Cardillo also called for the rejection of “outdated ideas about the value of open source data.” ABI analysts have long rejected those ideas and demanded better access to OSINT because we adhere to the pillar of Data Neutrality.

We KNOW that the web offers a wealth of information, but until now its size and scale have presented a number of challenges to the analyst: data from the web is vast, unstructured, and lacking in context, making it difficult to collect and process. Even after overcoming the issue of accessing the World Wide Web safely, the next question we faced was, “Where do I even start?”

In this blog post, I will focus on how Recorded Future complements the Data Neutrality pillar through structured open source intelligence, or OSINT.

How Recorded Future Structures the Web

Recorded Future is inherently data neutral, as we value the intelligence that we glean from the breadth of our coverage. Our intelligence engine harvests data from over 750,000 (and growing) sources — all unstructured text — across the open, deep, and dark web.

This data is then given structure by the automated creation and recognition of entities and events — terms all ABI analysts understand — which can be anything that we want to discover, understand, and resolve.

Of note, in Recorded Future these terms are broader than in the traditional ABI lexicon, as they include proxies, locations, and transactions (such as Twitter handles, threat actor groups, or locations in the geopolitical realm, as well as IP addresses connected to domains, phishing emails delivering malware, and exploits in the cyber domain).

When Recorded Future ingests a reference from the web (e.g., something somebody posts on the internet, whether via Twitter, an information security blog, or a forum), it catalogs that data point around the entities and/or events it contains. We accomplish this through machine learning and natural language processing, meaning that the collection and processing of data is automated.
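
To make this concrete, here is a minimal, purely illustrative Python sketch of the idea. The toy regular expressions stand in for Recorded Future’s actual machine-learning NLP, and the patterns, field names, and sample text are all hypothetical inventions of mine:

```python
import re
from dataclasses import dataclass, field

@dataclass
class Reference:
    """One unstructured data point (e.g., a tweet or forum post),
    cataloged around the entities extracted from it."""
    text: str
    source: str
    entities: dict = field(default_factory=dict)

# Toy patterns standing in for real NLP-based entity recognition.
PATTERNS = {
    "ip_address": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "domain": re.compile(r"\b[\w-]+\.(?:com|net|org|io)\b"),
    "twitter_handle": re.compile(r"@\w+"),
}

def extract_entities(ref: Reference) -> Reference:
    """Attach every recognized entity to the reference so it can later
    be queried by entity rather than by full-text search."""
    for kind, pattern in PATTERNS.items():
        hits = pattern.findall(ref.text)
        if hits:
            ref.entities[kind] = sorted(set(hits))
    return ref

ref = extract_entities(Reference(
    text="New C2 at 203.0.113.7 (evil-cdn.net), dropped via @bad_actor",
    source="twitter",
))
print(ref.entities)
# {'ip_address': ['203.0.113.7'], 'domain': ['evil-cdn.net'], 'twitter_handle': ['@bad_actor']}
```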

What this does is not insignificant. First, it takes the burden of collecting and processing data off the analyst (which, I can tell you from experience, can consume an inordinate amount of time and bandwidth). Second, it creates an ever-increasing pool of data points that an analyst can query for specific information, or set alerts against. Queries like these might include:

  • “Give me all domains ever used with X piece of malware.”
  • “Show me all tweets with negative sentiment within a one-kilometer radius of X location.”
  • “Show me all tweets and foreign news reports referencing X military equipment and specific hashtags throughout X location.”
  • “What are the latest zero-day exploits being discussed in criminal forums?”

Anonymous Hunting

Finding this information quickly, persistently, and comprehensively through traditional internet search engines or from a handful of favorite open source sites is nearly impossible.

Recorded Future mitigates this challenge for the analyst, enabling access to this wealth of information safely and efficiently through cloud technology, data encryption, two-factor authentication, and the decoupling of user information.

This means that an analyst can be on the unclassified web — a must for truly utilizing OSINT’s potential — and do so comfortably knowing that one’s presence and searches are protected.

Dark Web Sources

Let’s not gloss over Recorded Future’s coverage of deep and dark web sources.

There is a wealth of information in these areas (such as black markets and criminal forums) that no standard internet search engine, or OSINT analyst for that matter, is able to access. The “chatter” on these sites holds myriad clues that could potentially connect the dots across a variety of intelligence issues. Enabling analysts to access this kind of information without actually having to visit these sites is nothing short of revolutionary.

Multiple Languages

You might be thinking, what if these sites are in foreign languages?

Recorded Future has you covered with our natural language processing, or NLP. Currently, we natively process data in seven languages — English, Spanish, French, Russian, Farsi, Chinese, and Arabic — with two more languages on the horizon. This means that Recorded Future understands what is being discussed and can pick up threat information in these languages.

Furthermore, we provide a mechanism for in-platform translation, so if you see a reference written in Chinese, you don’t have to go out to Google Translate, you can simply click the Translate button right from within the platform.

Multi-Year Archive

In a nod to Sequence Neutrality, where the answers to our intelligence questions might be held in the data we collected previously, Recorded Future maintains a repository of six years’ worth of data. This allows an analyst to query historical data when another data point leads him or her there, and potentially find the key to unlock previously unanswered questions.

Finally, in response to Mr. Cardillo’s challenge to the companies exhibiting at GEOINT to offer more trial accounts and API keys, Recorded Future provides no-cost “pilots” for prospective clients and the ability to purchase an API token to pull in data.
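
As a rough illustration of what pulling data with such a token might look like, here is a hedged Python sketch. The endpoint URL, parameter names, and response shape below are placeholders of my own invention, not Recorded Future’s documented API:

```python
import requests

API_URL = "https://api.example-intel.com/v1/events"  # hypothetical endpoint, not the real API
API_TOKEN = "YOUR-API-TOKEN"

def pull_events(entity: str, start: str, end: str) -> list:
    """Pull structured events mentioning an entity over a historical
    window; the multi-year archive is what makes the time range useful."""
    resp = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        params={"entity": entity, "from": start, "to": end},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("events", [])

# e.g., revisit old data when a new lead points backward in time
for event in pull_events("evil-cdn.net", "2012-01-01", "2016-08-31"):
    print(event)
```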

How would this look? In the most traditional interpretation, structured, georeferenced data would be pulled from Recorded Future’s data repository and incorporated into a single GIS framework — such as ArcGIS — for correlation with data from other “INTs.”
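
As a sketch of that hand-off, the snippet below wraps hypothetical georeferenced records as GeoJSON, a standard interchange format that GIS frameworks such as ArcGIS can ingest directly (the record fields are invented for illustration):

```python
import json

# Hypothetical georeferenced records pulled from the data repository.
records = [
    {"entity": "protest", "lat": -22.9068, "lon": -43.1729,
     "time": "2016-08-05T20:00:00Z", "source": "twitter"},
]

def to_geojson(records: list) -> dict:
    """Wrap records in a GeoJSON FeatureCollection for import into a GIS."""
    return {
        "type": "FeatureCollection",
        "features": [
            {
                "type": "Feature",
                # GeoJSON coordinate order is longitude, latitude.
                "geometry": {"type": "Point",
                             "coordinates": [r["lon"], r["lat"]]},
                "properties": {k: v for k, v in r.items()
                               if k not in ("lat", "lon")},
            }
            for r in records
        ],
    }

with open("rf_events.geojson", "w") as f:
    json.dump(to_geojson(records), f, indent=2)
```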

Conclusion

This sort of access to all parts of the web that I have described above has never before been possible, which is why I am so excited about this technology.

Fortunately, I was able to represent Recorded Future at GEOINT 2016 and explain to GEOINT officers how our technology enables ABI analysis.

ABI requires access to all available sources of data; access to OSINT is a mandate for today’s threat intelligence capability. Analysts must be able to observe human activities, networks and relationships, and events and transactions across all domains of the operational environment. Recorded Future is an enabling technology — one that provides analysts access to structured data on the open, deep, and dark web. Indeed, as those outside of government begin to understand this methodology, enabling technologies such as those developed by Recorded Future will be key to success across industry.

The recently announced partnership between Recorded Future and Vencore will “leverage the OSINT collection capabilities of Recorded Future in support of Vencore’s mission to support and integrate technologies, tools, and data sources” in support of ABI and other advanced analytics like Object Based Production, or OBP.

Stay tuned for my next blog post about how Recorded Future complements OBP!

LZX

LZX is an ABI subject matter expert, having been a practitioner as well as an adjunct professor for the ABI 101 course.


from Recorded Future http://ift.tt/2bRbr83

via IFTTT

Rio Olympics Take the Gold for 540 Gb/sec Sustained DDoS Attacks!


http://ift.tt/2bBvE5C


by Roland Dobbins, Principal Engineer & Kleber Carriello, Senior Consulting Engineer

When organizing a huge, high-profile event like the Olympics, there are always chances for things to go wrong – and, given human nature, we tend to take it for granted when things go as planned, and to notice and highlight difficulties in execution.

A great deal has been written and spoken about the challenges facing the organizers, sponsors, and contestants in the 2016 Rio Olympics. And if we think about it, we can extrapolate thousands of potential pitfalls and difficulties which accompany any event of similar complexity.

Success is Blasé

We’ve come to view Internet applications and services in much the same way. When they’re working well, we don’t even notice how amazing it is that we’re able to instantly view live streaming video of the Olympic competitions, along with scores and expert commentary, pretty much anywhere on the globe, on our computers, smartphones, and tablets. But if we somehow can’t get access to the latest and greatest content and information instantly – and share it and discuss it online with our friends – then we become intensely frustrated and vocal in our displeasure. The uninterrupted availability and resiliency of online information services, apps, data, and content is now de rigueur for sporting events of any size. This is manifestly true for the Olympic Games.

Yes, the Rio Olympics experienced – and largely overcame – significant challenges which at times seemed almost insurmountable. Many problems, some of them factual, some of them less so, have been described and discussed and dissected in excruciating detail.

Even before the opening ceremonies began, public-facing web properties and organizations affiliated with the Olympics were targeted by sustained, sophisticated, large-scale DDoS attacks reaching up to 540 Gb/sec. While many of these attacks were ongoing for months prior to the start of the Games, attackers increased their efforts significantly once the Games were underway, generating the longest-duration sustained 500 Gb/sec-plus DDoS attack campaign we’ve observed to date.

And nobody noticed.

This is the sine qua non of DDoS defense – maintaining availability at scale, even in the face of skilled, determined attack. And just like the countless other services we rely upon every day such as electricity, fresh water, transportation, and emergency services, the ultimate metric of success is that the general public can go about their business and pursue their interests without ever knowing or caring that titanic virtual struggles are taking place in the background.

By any metric, the Rio Olympics have set the bar for rapid, professional, effective DDoS attack mitigation under the most intense scrutiny of any major international event to date. And did we mention that the attacks ranged up to 540 Gb/sec in size?!

An Ongoing Attack Campaign, Expanded


Over the last several months, a number of organizations affiliated with the Olympics have come under large-scale volumetric DDoS attacks ranging from the tens of gigabits/sec up into the hundreds of gigabits/sec. A large proportion of the attack volume consisted of UDP reflection/amplification attack vectors such as DNS, chargen, NTP, and SSDP, along with direct UDP packet-flooding, SYN-flooding, and application-layer attacks targeting Web and DNS services. The IoT botnet utilized in most of these pre-Olympics attacks was described in detail in a recent weblog post by our Arbor ASERT colleague Matt Bing. This very same botnet, along with a few others, was also used to generate the extremely high-volume (but low-impact, thanks to the efforts of the defenders!) DDoS attacks against an expanded list of targets throughout the 2016 Rio Olympics.
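
As a rough illustration of how a defender might triage such traffic, here is a toy Python sketch that flags flow records arriving from the well-known UDP service ports of these reflection vectors. The flow-record fields are hypothetical, not Arbor SP’s actual schema:

```python
# Well-known UDP source ports of common reflection/amplification vectors.
REFLECTION_PORTS = {53: "DNS", 19: "chargen", 123: "NTP", 1900: "SSDP"}

UDP = 17  # IP protocol number for UDP

def classify_flow(flow: dict) -> str:
    """Tag a flow as a likely reflection vector when its traffic arrives
    *from* a well-known UDP service port."""
    if flow["proto"] == UDP and flow["src_port"] in REFLECTION_PORTS:
        return f"possible {REFLECTION_PORTS[flow['src_port']]} reflection"
    return "unclassified"

print(classify_flow({"proto": 17, "src_port": 123, "dst_port": 40001}))
# possible NTP reflection
```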

One of the characteristics of information security in general, and DDoS defense in particular, is that we see new attack methodologies pioneered by more skilled attackers and used sporadically for years (and sometimes decades) before they’re ‘weaponized’ and made broadly available to low-/no-skill attackers via automation. We’ve encountered various types of high-volume/high-impact reflection/amplification attacks since the late 1990s; then, 3 1/2 years ago, they suddenly became wildly prevalent due to their inclusion in the arsenals of DDoS botnets-for-hire and so-called ‘booter/stresser’ services. This has led to a highly asymmetrical threat environment which favors even the most unskilled attacker, because these Internet ‘weapons of mass disruption’ are now available to the masses via a few mouse-clicks and a small amount of Bitcoin. We’ve seen this pattern repeat itself over and over again, with disparate groups of miscreants, totally unaffiliated with one another, independently rediscovering more sophisticated attack mechanisms and then proceeding to weaponize them with nice GUIs and even 24/7 online ‘customer’ support!

Everything Old is New Again

For the relatively small number of people who have a reason to think about how the Internet actually works, the protocols that come to mind are usually just TCP, UDP, and ICMP. Since those three represent by far the largest proportion of Internet traffic, little if any thought is given to other IP protocols.

In reality, there are 256 possible IP protocol numbers, 0-255. TCP is protocol 6, UDP is protocol 17, and ICMP is protocol 1. On the IPv4 Internet, only 254 of those protocols should ever be observed: protocol 0 is reserved in IPv4 (though not in IPv6!) and should never be used, even though routers and layer-3 switches will happily forward it along, and protocol 255 is also reserved, though most routers and switches won’t forward it. Among the less familiar IP protocols, Generic Routing Encapsulation (GRE), used for unencrypted ad hoc VPN-type tunnels, is protocol 47.
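
For the curious, here is a minimal Python sketch showing where that protocol number actually lives: byte 9 of the IPv4 header. The sample addresses are documentation-range placeholders:

```python
import struct

# The IP protocol numbers named above.
PROTO_NAMES = {1: "ICMP", 6: "TCP", 17: "UDP", 47: "GRE"}

def ipv4_protocol(packet: bytes) -> str:
    """Return the protocol carried by a raw IPv4 packet; the protocol
    number is byte 9 of the IPv4 header."""
    proto = packet[9]
    return PROTO_NAMES.get(proto, f"protocol {proto}")

# Build a minimal 20-byte IPv4 header carrying GRE (protocol 47).
header = struct.pack(
    "!BBHHHBBH4s4s",
    0x45, 0, 20,             # version/IHL, TOS, total length
    0, 0,                    # identification, flags/fragment offset
    64, 47, 0,               # TTL, protocol = GRE, checksum (unset)
    bytes([192, 0, 2, 1]),   # source address
    bytes([203, 0, 113, 9])  # destination address
)
print(ipv4_protocol(header))  # GRE
```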

Starting in late 2000, we began to observe more skilled attackers occasionally using these lesser-known protocols in DDoS attacks – almost certainly in an attempt to bypass router ACLs, firewall rules, and other forms of DDoS defense which were configured by operators who only took TCP, UDP, and ICMP into account. In many cases, these attacks initially succeeded until the defenders finally inferred what was going on, generally via analysis of NetFlow telemetry using collection/analysis and anomaly-detection systems such as Arbor SP.

[Figure: Example crafted GRE DDoS attack packet.]

And now we’ve seen those same attack techniques rediscovered, weaponized, and utilized during the Rio Olympics. In particular, significant amounts of GRE DDoS traffic were generated by the attackers; this ‘new’ attack methodology has now been incorporated into the same IoT botnet referenced above. As with all ‘new’ types of DDoS attacks the miscreants stumble upon, we expect to see other botnets-for-hire and ‘booter/stresser’ services adding GRE to their repertoires in short order.

We also observed uncomplicated, high-volume packet-floods destined for UDP/179. Most (though not all) UDP reflection/amplification attacks target UDP/80 or UDP/443 in order to confuse defenders who might not notice that the attackers are using UDP instead of TCP (TCP/80 is typically used for unencrypted Web servers, and TCP/443 for SSL-/TLS-encrypted Web servers). Here, we believe the attackers were attempting to mimic an attack on BGP, the routing protocol used to weave Internet-connected networks together. BGP actually runs on TCP/179; the irony is that one of the few best current practices (BCPs) actually implemented on a significant proportion (though not all!) of Internet-connected networks is the use of infrastructure ACLs (iACLs) to keep unsolicited network traffic from interfering with BGP peering sessions.
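
Expressed as a toy Python filter rather than real router configuration, the iACL reasoning looks roughly like this; the peer addresses and flow-record fields are hypothetical:

```python
# BGP is TCP/179 between known peers; anything hitting port 179 over UDP
# is noise or attack traffic masquerading as BGP.
TCP, UDP = 6, 17
BGP_PEERS = {("192.0.2.1", "192.0.2.2")}  # hypothetical peering pair

def verdict(flow: dict) -> str:
    """Mirror the infrastructure-ACL logic: permit real BGP sessions,
    drop UDP/179 floods that only masquerade as BGP."""
    if flow["dst_port"] != 179:
        return "not BGP-related"
    if flow["proto"] == TCP and (flow["src"], flow["dst"]) in BGP_PEERS:
        return "permit: legitimate BGP session"
    if flow["proto"] == UDP:
        return "drop: UDP/179 flood (BGP runs over TCP)"
    return "drop: unsolicited traffic to the BGP port"

print(verdict({"proto": 17, "src": "198.51.100.7",
               "dst": "192.0.2.2", "dst_port": 179}))
# drop: UDP/179 flood (BGP runs over TCP)
```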

DDoS Defense Gold – It’s All About Teamwork, Especially at the Olympics

The defenders of the Rio Olympics’ online presence knew they’d have their work cut out for them, and prepared accordingly. A massive amount of work was performed prior to the start of the Games:

  • understanding all the various servers, services, and applications, along with their network access policies;
  • tuning anomaly-detection metrics in Arbor SP;
  • selecting and configuring situationally appropriate Arbor TMS DDoS countermeasures;
  • coordinating with the Arbor Cloud team for overlay “cloud” DDoS mitigation services;
  • setting up virtual teams with the appropriate operational personnel from the relevant organizations;
  • ensuring network infrastructure and DNS BCPs were properly implemented; and
  • defining communications channels and operational procedures.

And that’s why the 2016 DDoS Olympics were an unqualified success for the defenders! Most DDoS attacks succeed simply due to the unpreparedness of the defenders – and this most definitely wasn’t the case in Rio.

The stunning victory of the extended DDoS defense team for the 2016 Rio Olympics demonstrates that maintaining availability in the face of large-scale, sophisticated and persistent DDoS attacks is well within the capabilities of organizations which prepare in advance to defend their online properties, even in the glare of the international spotlight and an online audience of billions of people around the world. The combination of skilled defenders, best-in-class DDoS defense solutions, and dedicated inter-organizational teamwork has been proven over and over again to be the key to successful DDoS defense – and nowhere has this been more apparent than during the 2016 Rio Olympics.

APT

via Arbor Threat Intelligence http://ift.tt/1pBMqDx

August 31, 2016 at 08:58AM