Author: Bozidar Spirovski

  • Choosing a Disaster Recovery Center Location

    When preparing a Disaster Recovery Center, one of the most important decisions is the location of the Disaster Recovery Center itself. Up until 9/11, a lot of companies kept their DR centers in an adjacent building; right after 9/11, everyone wanted to go as far from the primary data center as possible.


    One of the common misconceptions of Disaster Recovery planning is that longer distance ensures better disaster protection. Of course, increasing the distance between data centers reduces the likelihood that the two centers are affected by the same disaster. But just putting distance between locations may not be sufficient protection. In reality, the best distance for a DR location is dictated by a multitude of factors:

    • Minimal parameters dictated by regulators – certain businesses, especially telco and finance, must maintain regulatory compliance. It is not unusual for regulators to mandate a minimal distance between the primary and the Disaster Recovery location. You must comply with these parameters
    • Corporate RTO parameters – the company has decided that the Disaster Recovery Center must be up and running within the time defined as the RTO (Recovery Time Objective). This time includes the travel time to the Disaster Recovery center and the system activation times, so it is always important to take this parameter into account when choosing a Disaster Recovery site
    • Telecommunications services – a larger distance between the primary and DR site means higher telecommunication costs and limits the choice of appropriate remote copy technology. For instance, synchronous replication is still very difficult to achieve past the 40 km mark. Choose a location that is sufficiently distant but still manages to deliver the required bandwidth for the chosen replication/remote copy technology
    • Geophysical conditions – in order to avoid a natural disaster, it is not always sufficient to move your Disaster Recovery center a specific distance from the primary center. Most natural disasters deliver high impact in areas whose terrain configuration or other geophysical conditions support their spread. For instance, a safe hurricane impact distance was considered to be 150 km. However, hurricane Katrina retained strength more than 240 km inland since there was no terrain feature to stop it. The best location is in a separate flood basin, off a seismic fault line (or at least on a different one), and with a large mountain between the primary and the DR site
    • Means of transportation – increased distance between the primary and DR site may make it difficult for employees to travel to the recovery site. This is especially true in situations of crisis, when roads may be damaged or blocked, or public transport is stopped by strikes. Choose a site that has multiple travel options – railroad, motorway, even river boat
    • Vicinity of strategic objects – it is never smart to place your Disaster Recovery center in the vicinity of objects of strategic importance to the country. Such locations are prone to terrorist attacks, and to attack by opposing forces in a military conflict. Also, even in situations of natural disasters, strategic locations will have a strong military presence that may limit access to your Disaster Recovery center. Strategic objects include military bases, airports, refineries, oil depots, etc. Choose a safe distance from such locations
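    The telecommunications point above can be made concrete with a back-of-the-envelope latency estimate. The sketch below assumes a signal speed in optical fiber of roughly 200,000 km/s and a route factor of 1.5 (fiber paths are rarely straight lines); both are rule-of-thumb assumptions, not measured values for any specific carrier.

```python
# Rough latency estimate for synchronous replication between sites.
# Assumption: signal speed in optical fiber is ~200,000 km/s
# (about 2/3 the speed of light in vacuum).
FIBER_SPEED_KM_PER_S = 200_000

def replication_rtt_ms(distance_km, route_factor=1.5):
    """Round-trip propagation delay in milliseconds.

    route_factor accounts for fiber rarely following a straight line;
    1.5 is an illustrative assumption.
    """
    one_way_s = (distance_km * route_factor) / FIBER_SPEED_KM_PER_S
    return 2 * one_way_s * 1000  # every synchronous write waits one round trip

for km in (10, 40, 100):
    print(f"{km} km -> ~{replication_rtt_ms(km):.2f} ms added per write")
```

    Propagation delay is only the floor; real deployments add switching and storage-controller overhead on top, which is part of why synchronous replication becomes painful well before the raw numbers look alarming.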

    There is no such thing as an ideal Disaster Recovery location. The optimal location is the one that minimizes the risks at an acceptable cost and meets the required SLAs and authorities’ regulations.

    Talkback and comments are most welcome

    Related posts
    Mitigating Risks of the IT Disaster Recovery Test
    iPhone Failed – Disaster Recovery Practical Insight
    Business Continuity Analysis – Communication During Power Failure
    Business Continuity Plan for Brick & Mortar Businesses
    Example Business Continuity Plan For Online Business

  • Fuzzing with OWASP’s JBroFuzz

    I decided to search out a good web fuzzer for some testing needs. I wanted a fuzzer that was capable, customizable and could support my testing. The last thing I wanted was some sort of all-in-one application security scanner (since the false positives can just get ridiculous at times). Nope, all I needed was some automation assistance.

    First, a simple definition: Fuzzing, or fuzz testing, is a software testing technique that provides invalid, unexpected, or random data to the inputs of a program. If the program fails (for example, by crashing or failing built-in code assertions), the defects can be noted.
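    The definition above can be sketched in a few lines of Python. This is a toy mutation-based fuzzer, not how JBroFuzz works internally; the "fragile parser" target is made up purely to have something that can crash.

```python
import random

def mutate(data: bytes, n_flips: int = 4, seed=None) -> bytes:
    """Return a copy of `data` with a few randomly overwritten bytes --
    the simplest form of mutation-based fuzzing."""
    rng = random.Random(seed)
    out = bytearray(data)
    for _ in range(n_flips):
        out[rng.randrange(len(out))] = rng.randrange(256)
    return bytes(out)

def fuzz(target, sample: bytes, iterations: int = 100):
    """Feed mutated inputs to `target` and record any that crash it."""
    crashes = []
    for i in range(iterations):
        case = mutate(sample, seed=i)
        try:
            target(case)
        except Exception as exc:  # a real harness would also watch for hangs
            crashes.append((case, exc))
    return crashes

# Toy target: a "parser" that chokes on a NUL byte.
def fragile_parser(data: bytes):
    if b"\x00" in data:
        raise ValueError("unexpected NUL byte")

found = fuzz(fragile_parser, b"GET /index.html HTTP/1.0")
print(f"{len(found)} crashing inputs found")
```

    A real fuzzer adds coverage feedback, smarter mutations and crash triage on top of this loop, which is exactly the machinery a tool like JBroFuzz saves you from writing yourself.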

    I came across OWASP’s JBroFuzz and think I’ve found a good match. The tool provides a variety of brute force options and includes some nice graphing and statistics to analyze the information. I was also happy to see some nice documentation so I could quickly get up and running. My only complaint at the moment is that the proxy setup is a little clunky and unintuitive at first. But again, as long as you follow the guide, it shouldn’t be an issue.

    When do I plan to use this newfound fuzzer?
    1. Sites where I don’t have source for some reason. This is actually a rarity. If you want someone to assess the security of your web app, you should really give them the source code. Quick aside: if the consultants you select for an assessment aren’t asking for source code, an alarm should go off in your head. If they don’t do source code analysis, then they aren’t doing their job.

    2. When a site relies heavily on complex regular expressions for input validation and has weak output encoding. Yes, we can make the argument straight away that this is an issue. But it’s very powerful to make your case with a working exploit. Otherwise, you are trying to justify a bug fix for an issue that may or may not be currently exploitable. This can be a tough sell if developers are heavily loaded with feature enhancements, new functionality, upcoming releases, etc.

    This is a guest post by Michael Coates, a senior application security consultant with extensive experience in application security, security code review and penetration assessments. He has conducted numerous security assessments for financial, enterprise and cellular customers world-wide.
    The original text is published on …Application Security…

    Talkback and comments are most welcome

    Related posts
    Skipfish – New Web Security Tool from Google
    Tutorial – Using Ratproxy for Web Site Vulnerability Analysis
    How To – Malicious Web Site Analysis Environment
    Web Site that is not that easy to hack – Part 1 HOWTO – the bare necessities
    Checking web site security – the quick approach

  • Microsoft Patch Disclosure – March 2010 Out-of-Band

    March 2010 brings an out-of-band patch from Microsoft, covering a total of ten vulnerabilities.

    MS10-018 – Cumulative Security Update for Internet Explorer (980182)

    The update covers nine privately reported vulnerabilities and one publicly disclosed vulnerability in Internet Explorer. The most severe vulnerabilities could allow remote code execution if a user views a specially crafted Web page using Internet Explorer.

    CVE-2010-0267 – Uninitialized Memory Corruption Vulnerability
    CVE-2010-0488 – Post Encoding Information Disclosure Vulnerability
    CVE-2010-0489 – Race Condition Memory Corruption Vulnerability
    CVE-2010-0490 – Uninitialized Memory Corruption Vulnerability
    CVE-2010-0491 – HTML Object Memory Corruption Vulnerability
    CVE-2010-0492 – HTML Object Memory Corruption Vulnerability
    CVE-2010-0494 – HTML Element Cross-Domain Vulnerability
    CVE-2010-0805 – Memory Corruption Vulnerability
    CVE-2010-0806 – Uninitialized Memory Corruption Vulnerability
    CVE-2010-0807 – HTML Rendering Memory Corruption Vulnerability

    Microsoft rates the Severity of the risk: Critical

  • Mitigating Risks of the IT Disaster Recovery Test

    The IT Disaster Recovery Test, as part of Business Continuity testing, is becoming an annual event for most IT departments. It is mandated by a lot of regulators, nearly insisted upon by internal audit and of course a very healthy thing to do.

    But performing the IT DRP test without proper risk management can put your organization at significant risk.


    To put things into perspective, let’s analyze the steps, risks and countermeasures of an IT Disaster Recovery test:

    1. Failure of primary systems
    Activity: In order to simulate a disaster situation, the primary systems need to be caused to fail on some level.
    Risks:
    1. Databases not closed properly/damaged due to forced shutdown or forced power failure
    2. Hardware components failing due to forced shutdown or power failure
    3. Split-brain cluster due to an uncontrolled sequence of failures of servers and storage
    Countermeasures:
    1. Full backup prior to the initiation of the DRP test
    2. Backup components and vendor presence at the ready during the entire test
    3. Not performing a direct forced shutdown but forcing a network-level isolation at the routers

    2. Activation of Disaster Recovery systems
    Activity: Severing any relation between the DR and the primary systems and running the DR systems as a temporary primary.
    Risks:
    1. Actual failure of the primary system during the test
    2. Failure of the primary system while the DR system is concluded to be non-functional
    Countermeasures:
    1. Full awareness of the test by every interested party – business custodians, directors of divisions and top management – so that the real Business Continuity Plan can be initiated if needed
    2. Full backup prior to the initiation of the DRP test at the DRP site, and full vendor support

    3. Reconfiguring the user environment
    Activity: Intervening in the end-user environment in a way that will make the users use the DR system.
    Risks:
    1. Error in reconfiguration which may cause the end-user to input test data into the primary systems
    2. Error in reconfiguration which may cause the primary system to stop functioning
    Countermeasures:
    1., 2. Scripted and documented steps of reconfiguration. All steps should be performed by two persons – one observing the other’s actions

    4. Reverting to the primary systems
    Activity: Resuming the primary systems at some level and re-establishing the relation between the DR and the primary systems.
    Risks:
    1. Error in reconfiguration which may cause the primary system to stop functioning
    2. Copying of test data that was input into the DR test system back into the primary location
    3. Failure of primary systems during resumption
    Countermeasures:
    1. Scripted and documented steps of reconfiguration. All steps should be performed by two persons – one observing the other’s actions
    2. Fully controlled and documented process of resumption, which guarantees that only the primary system is the data master
    3. Full backup prior to the initiation of the DRP test; backup components and vendor presence at the ready during the entire test

    With all these risks, is it more prudent to never perform an IT DRP test? – Absolutely NOT, and here is why:

    • Performing the IT DRP test actually confirms that things are running, and if something breaks, you are much more prepared for the next time.
    • Not performing the test will just make you think everything is great, until the incident occurs. And the incident is just as certain as death and taxes

    So, perform the IT DRP test regularly, but with a whole set of countermeasures for the possible risks which can occur during the test. Of course you will miss some risks, but planning for 10 and missing 1 is much better than not planning at all!

    Talkback and comments are most welcome

    Related posts
    iPhone Failed – Disaster Recovery Practical Insight
    Business Continuity Analysis – Communication During Power Failure
    Business Continuity Plan for Brick & Mortar Businesses
    Example Business Continuity Plan For Online Business

  • Internet Marketing – Attracting Good Numbers Of Customers

    In this 21st century, the boom of the Internet is offering ample opportunity to everyone. If you look 10 to 15 years back, people were widely using the Internet for chatting, downloading, emailing and gathering information. Today, people are heavily using the World Wide Web for Internet marketing. Certainly, Internet marketing has become the buzzword of this millennium. This marketing system is totally different from other types of marketing, in which an individual has to go to the marketplace to promote or sell products.

    In Internet marketing, all advertising and promotion are done right on the online medium. This method of promotion increases sales and traffic and can attract good numbers of customers from all around the world. It has been found that many small and big companies are taking help from a good online marketing company to create their presence. If you are looking to boost your sales, then you need to look for a good online marketing company. One of the most important tools in Internet marketing is Search Engine Optimization.
    These days, lots of websites are using SEO techniques to boost sales and traffic. There are off-page and on-page search engine optimization techniques that can offer you outstanding results. At present, Internet marketing is also offering good jobs with high pay scales.
    There are hundreds and hundreds of software companies that provide training on Internet marketing.

    It is true that the rise of online marketing is offering quality jobs that can make your dreams come true. If you have a website and are thinking of attracting good numbers of visitors, then online marketing is a must. There are lots of activities done to promote a website: directory submission, article submission, PR networking, social bookmarking and others.

    About the Author:

    This is a guest post from Davide Smith, an author from SelfTestEngine, an exam preparation tool for IT certification exams.

  • Compiling the latest Skipfish for Windows

    Seeing that new skipfish releases appear as often as twice a day, Shortinfosec is starting a persistent post to publish the latest versions of skipfish compiled for Windows.

    Here you’ll find the latest compiled versions, as well as a historical trail of the previous versions

    In order to run it, just unzip the archive – it contains the cygwin run-time libraries needed for running skipfish. The compiled code is tested on Windows 7 and Windows XP Pro

    Download the latest version of skipfish for windows – skipfish 1.26b

    Previous versions

    Download skipfish 1.25b for windows
    Download skipfish 1.22b for windows
    Download skipfish 1.18b for windows
    Download skipfish 1.13b for windows
    Download skipfish 1.11b for windows

    Related posts
    Skipfish – New Web Security Tool from Google
    Ratproxy – Google Web Security Assessment Tool

  • Skipfish – New Web Security Tool from Google

  • Personal data – Publish only what you can afford to get leaked

    The security and privacy risks of social networks have been a hot topic among forums and experts for years. And it appears that the worst fears are now materializing – not only can someone troll for your personal data, they can now purchase it!


    Myspace is selling data through the reseller InfoChimps. The data that InfoChimps has listed includes ‘user playlists, mood updates, mobile updates, photos, vents, reviews, blog posts, names and zipcodes.’

    So, for everyone that still has some illusions: On the Internet, you should only post data about yourself that you want distributed, or at least data which won’t hurt you in any way when it gets leaked.


    Talkback and comments are most welcome

    Related posts
    A Simplified Analysis – Can you Forge a Biometric ID?
    Privacy Ignorance – Was Eric Schmidt thinking?
    Google Voice – No Privacy Remains?

  • Management Reaction to Failed Cloud Security

    After all the risk assessments, cost analysis and decisions, you decide to send your data into the cloud. And things are good – at least until the security breach.

    When that happens, every security professional and IT manager will get grilled by top management. YouTube has a mock-up video that just might give you a feeling of what this will look like.

    Of course, a video of Hitler reacting to a hacked cloud computing service is a bit of overkill. But be sure that you’ll hear a lot of the sentences that are mocked up, even if not in that tone.

    You can see the video here

    Talkback and comments are most welcome

    Related posts
    Security Concerns Cloud “Cloud Computing”
    How to Trust Cloud Computing
    Cloud Computing – Premature murder of the datacenter

  • Microsoft Patch Tuesday – March 2010

    The March update brings two advisories, with eight vulnerabilities covered.

    MS10-016: Potential Remote Code Execution in

    • Windows Movie Maker, covering one vulnerability:

    CVE-2010-0265 (Buffer Overflow in Movie Maker and Producer).

    Microsoft rates it as Exploit Index: 1; Deployment Priority: 2.


    MS10-017: Potential Remote Code Execution in

    • Excel
    • Excel Viewer
    • Office for Mac
    • Office Compatibility Pack,
    • Excel Services

    covering 7 vulnerabilities:
    CVE-2010-0257 (Record Memory Corruption)
    CVE-2010-0258 (Sheet Object Type Confusion)
    CVE-2010-0260 (MDXTUPLE Record Heap Overflow)
    CVE-2010-0261 (MDXSET Record Heap Overflow)
    CVE-2010-0262 (FNGROUPNAME Record Uninitialized Memory)
    CVE-2010-0263 (XLSX File Parsing)
    CVE-2010-0264 (DbOrParamQry Record Parsing).

    Microsoft rates it as Exploit Index: 1; Deployment Priority: 2.

  • Cloud Computing Data Protection World Map

    Security and privacy in cloud computing are hot topics, and everyone has a take on them. Cloud computing providers deliver their levels of security and privacy through their internal policies and procedures, but the rigidity of these policies is strongly influenced by government regulations.

    If the country within which a cloud computing provider resides or is registered has lax provisions on privacy, do not expect wonders in the protection of your hosted data – especially since such lax provisions may even be created to allow government agencies to gain access to hosted data.

    Forrester Research felt the pulse of things by investigating the regulatory frameworks of countries throughout the world. Here is a summary of the results of this research:

    Country-specific regulations governing privacy and data protection vary greatly. To help you grasp this issue at a high level, Forrester created a privacy heat map that denotes the degree of legal strictness across a range of nations.

    You can investigate the map here. To be very sincere, I would like my data to be either in Germany or Argentina. Oh, and the USA just got a proverbial slap in the face by being classified in the same category as Colombia, Paraguay and the Russian Federation.

    The esteemed senators and congressmen in the USA should think hard about moving up the ladder of privacy and data protection if they don’t want to soon be classified in the same category as China 🙂

    Talkback and comments are most welcome

    Related posts
    Security Concerns Cloud “Cloud Computing”
    How to Trust Cloud Computing
    Cloud Computing – Premature murder of the datacenter

  • Accelerating Security Assessment with MS Security Assessment Tool

    When working on a security assessment, it is always helpful to use an automated tool that compares the key elements to the known best practices, and generates an overview result set.
    Among other tools which can be used, Microsoft has released a tool titled Microsoft® Security Assessment Tool.

    The tool’s assessment strives to identify the business risk of the organization and the security measures deployed to mitigate that risk.
    The assessment takes the form of a questionnaire, with Yes/No answers that cover the following areas:

    • Infrastructure – Infrastructure security collects information on how the networks function, what business processes (internal or external) they support, how hosts are built and deployed, and how the networks are managed and maintained.
    • Applications – Application security reviews applications within the organization and assesses them from a security and availability standpoint. It examines technologies used within the environment, and reviews the high-level procedures an organization can follow to help mitigate application risk.
    • Operations and People – This section reviews the processes within the enterprise governing corporate security policies, Human Resources processes, and employee security awareness and training. It also focuses on dealing with security as it relates to day-to-day operational assignments and role definitions.
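    The questionnaire idea above can be sketched as a simple roll-up. The real MSAT scoring model is Microsoft’s own, so the questions and the flat scoring here are purely illustrative:

```python
# Illustrative only: sketches how a Yes/No questionnaire can roll up
# into per-area scores. Questions are made up, not taken from MSAT.
ANSWERS = {
    "Infrastructure": {"Hosts built from hardened images": True,
                       "Network changes follow a documented process": False},
    "Applications":   {"Input validation on all external interfaces": True,
                       "Security review before each release": True},
    "Operations and People": {"Annual security awareness training": False,
                              "Documented incident response roles": True},
}

def area_scores(answers):
    """Percentage of best-practice questions answered 'yes' per area."""
    return {area: round(100 * sum(qs.values()) / len(qs))
            for area, qs in answers.items()}

for area, score in area_scores(ANSWERS).items():
    print(f"{area}: {score}% of best practices met")
```

    The value of such a roll-up is exactly what the post describes: a quick overview that points you at the weakest areas, not a verdict on actual security.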

    The resulting comparison to best practices generates a summary report, as well as a much more useful detailed report of areas which are lacking in comparison to the best practices. The report contains a lot of suggestions and links to related products and best practices published by Microsoft.


    The MS Security Assessment Tool and its report are not a replacement for a full-blown analysis, nor can they be used as a one-stop shop for a realistic security analysis. When performing a real analysis, an in-depth review of process and technology is needed.
    MSAT is just a helpful tool to generate a security posture overview and some automated recommendations, so it is a nice start. For everything else, you will need to bring in expert professionals.

    Talkback and comments are most welcome

    Related posts
    WMI Scanning – Excellent Security Tool
    Risk Assessment with Microsoft Threat Assessment & Modeling
    Google’s Ratproxy Web Security Tool for Windows
    Analysis of Windows Security Logs with MS Log Parser
    How To – Malicious Web Site Analysis Environment

  • Man In The Middle Attack – Explained

    “That’s vulnerable to a man in the middle attack!”

    You’ve probably heard this before, but let’s dive into the details of this attack and understand exactly how it works.

    Definition
    First, a quick definition: a man-in-the-middle (MitM) attack is an attack where the communication exchanged between two parties is surreptitiously monitored and possibly modified by a third, unauthorized party. In addition, this third party performs the attack in real time (i.e., stealing logs or reviewing captured traffic at a later time would not qualify as MitM).

    While a MitM could be performed against any protocol or communication, we will discuss it in relation to HTTP traffic in just a bit.

    Requirements for Attack
    A MitM attack can be performed in a few different ways:

    1. The attacker is in control of a router along the normal path of traffic between the victim and the server the victim is communicating with.
    2a. The attacker is located on the same broadcast domain (e.g. subnet) as the victim.
    2b. The attacker is located on the same broadcast domain (e.g. subnet) as any of the routing devices used by the victim to route traffic.

    We will discuss 2a. This is a likely attack that can be used against your neighbors or the person sitting next to you at a coffee house.

    The Attack
    A MitM attack takes advantage of weaknesses in network communication protocols to convince a host that traffic should be routed through the attacker instead of through the normal router. In essence, the attacker advertises that they are the router, and the client updates its routing records appropriately. This attack is called ARP spoofing.
    The (greatly simplified) purpose of ARP (Address Resolution Protocol) is to enable IP-address-to-MAC-address translations for hosts. This is required so that packets can reach their final destination host.

    By design, ARP does not contain authentication. Therefore, any host can reply to an ARP request or send an unsolicited ARP response to a specific host. These ARP response messages are used by the attacker to instruct the victim’s machine that the appropriate MAC address for a given IP address is now the MAC address of the attacker’s machine. More specifically, the attacker is instructing the victim to overwrite their ARP cache for the IP->MAC entry for the router. Now, the IP address for the router will correspond to the MAC address for the attacker’s machine.

    What does this mean? Now, all of the victim’s traffic will be routed through the attacker. Of course, we don’t stop here. In order to allow the traffic to reach the Internet, the attacker must configure his system (or attack tool) to also forward this traffic to the original router. In addition, the attacker performs a similar ARP spoofing attack against the router. This way the router knows to send traffic, that was destined for the victim machine, to our attacker instead. The attacker then forwards on the traffic to the victim. This completes the “chain” and places the attacker “in the middle” of the communication.
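    The cache-poisoning mechanism described above can be illustrated with a toy model. This is just the data-structure view of why unauthenticated ARP fails, not an attack tool; all addresses are made up:

```python
# Toy model of an ARP cache: because ARP has no authentication, the
# victim caches any reply, even an unsolicited forged one, so the
# router's entry can be overwritten with the attacker's MAC.
class Host:
    def __init__(self):
        self.arp_cache = {}  # IP -> MAC

    def receive_arp_reply(self, ip, mac):
        # Replies are cached unconditionally, even if never requested.
        self.arp_cache[ip] = mac

ROUTER_IP, ROUTER_MAC = "192.168.1.1", "aa:aa:aa:aa:aa:aa"
ATTACKER_MAC = "ee:ee:ee:ee:ee:ee"

victim = Host()
victim.receive_arp_reply(ROUTER_IP, ROUTER_MAC)   # legitimate entry
print(victim.arp_cache[ROUTER_IP])                # router's real MAC

# Attacker sends an unsolicited, forged reply claiming the router's IP:
victim.receive_arp_reply(ROUTER_IP, ATTACKER_MAC)
print(victim.arp_cache[ROUTER_IP])                # now the attacker's MAC
```

    From this point on, every frame the victim addresses to "the router" physically goes to the attacker’s NIC, which is exactly the position described in the paragraph above.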

    Impacts on HTTP

    At this point, the attacker has the ability to view and modify any TCP traffic sent to or from the victim machine. HTTP traffic is unencrypted and contains no authentication. Therefore, all HTTP traffic can be trivially monitored/modified by the attacker.

    What about HTTPS?

    Everything we have talked about thus far is related to getting in the middle of the network communications. This enables the attacker to view most exchanged data, but does not enable the attacker to intercept data exchanged over protocols that implement their own authentication and encryption (e.g. SSH, SSL/TLS).
    But this is where the fun starts. The purpose of HTTPS is to create a secure communication channel on top of HTTP through the use of SSL or TLS. On its own, SSL/TLS can be very effective and secure. However, there are significant problems in some implementations of SSL/TLS which can effectively render it useless. In addition, the browser’s handling of SSL/TLS can lead to issues when both HTTPS and HTTP sites are visited by the user.

    More devious means are needed to perform a MitM against SSL/TLS. At this point the attacker could attempt to intercept HTTPS traffic by using a custom certificate. This would present a certificate warning message in the user’s browser and likely alert the user to the attack. Luckily for the attacker, most users would ignore the warning and continue – thus exposing all of their data.

    Alternatively, the attacker could try and use tools such as SSLstrip to leverage poor application design with regards to SSL/TLS. This could also enable the attacker to obtain the victim’s password over clear text HTTP.
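    The core idea behind SSLstrip-style downgrading can be sketched in one function. This is a greatly simplified illustration, not SSLstrip’s actual code: the real tool also tracks which URLs it downgraded so it can proxy them to the server over HTTPS while the victim stays on HTTP.

```python
# Simplified illustration of HTTPS-stripping: as server responses pass
# through the attacker, https:// links are rewritten to http://, so the
# victim's browser never upgrades to TLS and credentials travel in the
# clear where the attacker can read them.
def strip_https_links(html: str) -> str:
    return html.replace("https://", "http://")

page = '<a href="https://bank.example.com/login">Log in</a>'
print(strip_https_links(page))  # the login link now points at plain HTTP
```

    This is also why a site that serves its login *form* over HTTP, even while posting to HTTPS, is poor application design: the attacker gets to rewrite the page before the secure connection ever starts.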

    How concerned should you be?

    The attack scenario described in 2a can be performed by any user on the same broadcast domain as your machine. This means that anyone sitting in the same coffee house on the wireless network could be an attacker. Also, if you connect directly to your Comcast/RoadRunner/ATT/whatever home connection, then many of your neighbors could also perform this attack against you. And if you use a home router instead of directly plugging the connection into your machine – well, then the attack is still possible via 2b (essentially the same attack).

    Really, the only reason this isn’t a bigger deal is the requirement to be on the same subnet. Right now we have so many other issues, such as XSS, SQL injection, etc., which can all be exploited remotely by attackers. The attackers just sit in their remote locations and destroy web sites from afar. However, the point is this: if an attacker wants to steal YOUR specific bank data, then all they need to do is sit next to you at a coffee house or sign up for Internet service in your area.

    This is a guest post by Michael Coates, a senior application security consultant with extensive experience in application security, security code review and penetration assessments. He has conducted numerous security assessments for financial, enterprise and cellular customers world-wide.

    The original text is published on …Application Security…

    Talkback and comments are most welcome

    Related posts
    How To – Malicious Web Site Analysis Environment
    Security Information Gathering – Brief Example
    DHCP Security – The most overlooked service on the network
    Example – Bypassing WiFi MAC Address Restriction

  • Minimize Impact of Online Intelligence Searches

    In our previous article – Digging for information with Open Source Intelligence we looked at the generic process of information gathering. But what is this process looking for? The answer to this question is important to all parties:

    1. to the investigator – for proper focusing of his/her efforts
    2. to the possible targets – in order to properly defend against Open Source Intelligence

    So here are the items that the investigator is looking for when employing Open Source Intelligence against a potential target, and the methods of minimizing the possibility of someone discovering something:

    The final goal of any intelligence action is to obtain information that can be sold or used as competitive advantage. This can be as simple as a password, or as complex as plans for a corporate takeover.

    At the information gathering level, this translates into:

    1. Content of files indexed by search engines – In the ideal intelligence world, everything is contained in a single-page document that can be scanned or downloaded from the internet. Although such documents won’t surface on the internet unless someone is utterly dumb, bits and pieces of information can be found in files that have found their way onto the web and got indexed by the search engines. In order to make such pieces of info useless, hire a person to perform regular ‘Google Hacking’ to find such documents. Bear in mind that once documents are on the internet and get indexed, you cannot destroy all publicly available copies. Instead, change the information within your company to render the public information useless or false.
    2. Operational or potential business relationships – Web sites, news articles and corporate newsletters of partners and providers can contain names and sites of the target company, as can forum and support site posts. While these are harmless by themselves, using these names the investigator can establish that there is some relationship between the parties, and even the nature of the relationship. This can be used in a competitive bid, in social engineering, or simply leaked to the public. There is no real protection for such information, except being aware that it is ‘in the wild’.
    3. Real person identities – Publicly available names and contact info of any personnel related to the target are a potential gold mine. With the advent of social networks, once the investigator knows someone’s name, they can proceed with a detailed investigation of that person, and attempt to breach their credentials by trying common password combinations (pet names, birthdates, phone numbers, etc.). Most companies actually prefer to publish real persons’ names and contacts in an effort to appear closer to their potential clients and partners, so there is no direct protection. Much like in point 1, you should hire a person to perform regular analysis of which names are publicly available and what information is available on those persons, combined with a penetration test on their accounts. You can also institute a policy and awareness trainings to make such persons aware of their exposure.
    4. Relationship context – This is merely an extrapolation of real identities, business contacts and online communication. It can give the investigator insight into ‘who receives orders from whom’ or ‘who is close to whom’. Such insight is crucial for social engineering attacks. Controlling it is actually a matter of controlling the previous 3 points.
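    The regular ‘Google Hacking’ sweep suggested in point 1 can be sketched as a simple query generator. The domain, file types and search terms below are placeholders, not a recommended or exhaustive list:

```python
# Assemble search-engine queries that look for document types leaked
# under your own domain. "example.com" is a placeholder target.
DOC_TYPES = ["pdf", "doc", "xls", "ppt"]
SENSITIVE_TERMS = ["confidential", "internal use only", "password"]

def build_queries(domain):
    """Return one site-scoped query per (file type, sensitive term) pair."""
    queries = []
    for ext in DOC_TYPES:
        for term in SENSITIVE_TERMS:
            queries.append(f'site:{domain} filetype:{ext} "{term}"')
    return queries

for q in build_queries("example.com")[:3]:
    print(q)
```

    Run against your own domain, the hit list is exactly the set of documents an investigator would find first, which makes it a natural input for the regular review described above.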

    In summary, Open Source Intelligence is going to collect information about you and/or your company. You can do little to prevent it, but you can do much to render such information of very little value to anyone.

    Talkback and comments are most welcome

    Related posts
    Digging for information with Open Source Intelligence
    Security Information Gathering – Brief Example
    Corporate Security – Are the hackers winning?

  • Digging for information with Open Source Intelligence

    Wikipedia defines Open Source Intelligence (OSINT) as a form of intelligence collection management that involves finding, selecting, and acquiring information from publicly available sources and analyzing it to produce actionable intelligence.

    In reality, the methodology used in OSINT is the information gathering phase of every penetration test. They only stuck a fancy name on the process.

    Regardless of the name, OSINT is very useful, and its results can be put to good use even outside of the penetration testing process.

    The information gathering, or OSINT process can be summarized in the following steps:

    1. Identify your point of interest – who/what is your target of investigation. Start broad, and then narrow down to the interesting elements. For instance, start with a domain name or an IP address pool for a provider, until you find the contacts and names of actual persons. Then you can start drilling for material left on the Internet by them for further useful clues
    2. Collect information from multiple sources – consult search engines, corporate sites, mailing list servers; even the old and forgotten Usenet might be useful
    3. Sift through the gathered information to form a useful result – identify interesting pieces of intelligence for further use
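    Step 3 is the part most worth automating, even crudely. Here is a minimal sketch of sifting: score each collected snippet by how many target-related keywords it contains and keep only the snippets above a threshold. The snippets and keywords are made-up examples.

    ```python
    # Minimal sifting sketch: rank collected text snippets by keyword hits.
    # All names below ("ACME Corp", "ExampleSoft", "John Doe") are invented.

    def sift(snippets, keywords, min_hits=2):
        """Return snippets containing at least min_hits keywords, best first."""
        scored = []
        for text in snippets:
            hits = sum(1 for kw in keywords if kw.lower() in text.lower())
            if hits >= min_hits:
                scored.append((hits, text))
        return [text for hits, text in sorted(scored, reverse=True)]

    snippets = [
        "ACME Corp selects ExampleSoft ERP, says CIO John Doe",
        "Unrelated forum post about gardening",
        "John Doe (ACME Corp) posted on the ExampleSoft support forum",
    ]
    print(sift(snippets, ["ACME", "John Doe", "ExampleSoft"]))
    ```

    In practice the scoring would be richer (dates, domains, proximity of terms), but even this crude filter discards the bulk of irrelevant hits.
    
    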

    The process looks very simple on paper, but bear in mind that most searches generate tons and tons of possible clues and/or false leads. It takes time and patience to separate the useful from the useless.

    Here is what you’ll have to deal with:

    • Irrelevant/false hits on a keyword – URL links or sites that contain the same sequence of words but in totally different context. The more generic the terms that you are searching for, the more of these there will be.
    • Fake contacts placed during registration process – looking for that all important ‘Who’ behind some site or document? Bear in mind that contact information on the web is usually fake to avoid pestering sales persons. And anyone can use your target’s name for an alias on a registration.
    • Hundreds or thousands of archived messages from forums and mailing lists – much like the previous one, aliases and nearly useless communication can be found and needs to be sifted through. And you cannot be certain that you are looking at something written by your target of investigation.
    • Documents with irrelevant word matching – a large enough digital book will contain all the words of virtually any phrase

    There are a lot of tools that will help you on your quest for information, but I’ll sum up those that I find useful:

    • Google hacking – The title says it all. Choose your keywords and then drill for data on Google
    • Maltego CE – a client-side program that drills the Internet for information on the element that you have chosen as a source. It will return all kinds of possible information for further drill-down. Produces a lot of false positives
    • Silobreaker – an information correlation and pattern recognition system that returns results as summarized information clusters related to your search query. Not always very accurate, so always use other sources.

    Talkback and comments are most welcome

    Related posts
    Security Information Gathering – Brief Example
    Corporate Security – Are the hackers winning?

  • Telco SLA – parameters and penalties

    Communication links provided by Telco providers are critical to most businesses. And as any network admin will tell you, these links tend to have outages, ranging from small interruptions up to massive breakdowns that can last for days.

    When such interruptions occur, businesses suffer, but unless the provider has serious contractual obligations, there is little effort on their side to improve service or correct issues.

    That is why businesses need a good Service Level Agreement (SLA). Usually, the preparation of the SLA is dreaded by most, since it is full of numbers and parameters on which the client must decide what is acceptable, and whose values may be difficult to measure.

    SLA Parameters
    A good SLA is not necessarily loaded with a lot of numbers. You need to work with 2-3 parameters which are important to you. Here are the most frequent SLA parameters, with their acceptable values:

    • Availability – more than 99% for internet, more than 99.5% for corporate data links
    • Packet Loss – less than 0.4% for internet, less than 0.2% for corporate data links
    • Jitter – less than 15ms for internet, less than 5ms for corporate data links
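    To make the availability figure tangible, it helps to translate the percentage into allowed downtime per month. A quick sketch of that arithmetic, assuming a 30-day billing month:

    ```python
    # Translate an availability percentage into allowed downtime minutes,
    # so an SLA figure can be sanity-checked against real outage logs.

    def allowed_downtime_minutes(availability_pct, days=30):
        """Allowed downtime per period for a given availability percentage."""
        total_minutes = days * 24 * 60
        return total_minutes * (100.0 - availability_pct) / 100.0

    print(allowed_downtime_minutes(99.0))   # internet link -> 432.0 minutes
    print(allowed_downtime_minutes(99.5))   # corporate link -> 216.0 minutes
    ```

    So a 99.5% commitment allows roughly three and a half hours of outage per month; anything beyond that is a breach.
    
    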

    SLA Penalties
    And you need penalties which will hurt the provider. Penalties are the big stick in the SLA.
    Here are the penalties that you want:

    • small breach of SLA – 25% to 33% of monthly fee
    • large breach of SLA – 50% to 100% of monthly fee
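    The penalty tiers above can be sketched as a simple calculation against the monthly fee. The threshold separating a “small” from a “large” breach is an assumption for illustration; the real thresholds belong in the contract text.

    ```python
    # Sketch of the penalty tiers applied to a monthly fee. The 0.5
    # percentage-point breach threshold is an illustrative assumption.

    def sla_penalty(monthly_fee, measured_availability, committed=99.5):
        """Return the penalty owed for the measured availability."""
        shortfall = committed - measured_availability
        if shortfall <= 0:
            return 0.0                  # SLA met, no penalty
        elif shortfall <= 0.5:          # assumed "small breach" threshold
            return monthly_fee * 0.25   # 25% of monthly fee
        else:                           # large breach
            return monthly_fee * 0.50   # 50% of monthly fee

    print(sla_penalty(1000.0, 99.2))    # small breach -> 250.0
    print(sla_penalty(1000.0, 97.0))    # large breach -> 500.0
    ```

    Whatever the exact tiers, the point is the same: the penalty must be large enough that the provider feels it.
    
    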


    Be aware that no provider will create an SLA that will eat much of its profits. A committed provider can be identified by the type of Service Level Agreement (SLA) that it is prepared to sign without special negotiations.

    Here are three different levels of SLAs – not so different in metrics and parameters, but quite different in terms of penalties.

    Talkback and comments are most welcome

    Related posts

    9 Things to watch out for in an SLA
    The SLA Lesson: software bug blues
    5 SLA Nonsense Examples – Always Read the Fine Print

  • Geo Location based DDOS can target Mobile Operators

    The sharp rise of smart mobile phones is introducing a new and concerning attack vector – a geo-location based DDOS.

    Example Scenario
    Imagine a popular mobile application (a Bejeweled-like game) that is downloaded by many.

    1. The app contains a small amount of code to reference the phone’s GPS and also check in with a command and control website.
    2. The attacker decides on a city to target and a popular time of day and then updates the command and control website.
    3. The mobile applications all check in with the C&C site, and all mobile applications in the city area begin downloading large video files from YouTube.

    Result?

    • A massive sudden spike in high bandwidth usage of the mobile data network in a single metropolitan area.
    • Most cellular networks run near capacity during the lunch rushes of popular cities. A sudden massive spike such as this would likely push the network over the edge and bring it down entirely.
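    A back-of-the-envelope calculation shows why the spike is so damaging. The handset count and per-stream bitrate below are illustrative assumptions, not measured figures:

    ```python
    # Rough sketch: aggregate traffic generated by compromised handsets all
    # streaming at once. Both input numbers are illustrative assumptions.

    def aggregate_demand_gbps(handsets, stream_mbps):
        """Total demand in Gbit/s for handsets each pulling stream_mbps."""
        return handsets * stream_mbps / 1000.0

    # 50,000 infected phones in one city, each pulling a 2 Mbit/s video stream
    print(aggregate_demand_gbps(50_000, 2))   # -> 100.0 Gbit/s of sudden demand
    ```

    Concentrated in a single metropolitan area’s cell sites, a sudden load of that order is far beyond what the radio access network is provisioned for.
    
    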

    This is a tough issue to address and I think it warrants a bit of consideration.

    This is a guest post by Michael Coates, a senior application security consultant with extensive experience in application security, security code review and penetration assessments. He has conducted numerous security assessments for financial, enterprise and cellular customers world-wide.
    The original text is published on …Application Security…

    Talkback and comments are most welcome

    Related posts
    GSM Encryption Broken – Cellular Calls At Risk
    When Will Your Mobile Phone get Hacked?

  • Free VS Commercial Database Vulnerability Scanning

    The vulnerability assessment process must include a vulnerability assessment of your databases.
    And the sad reality is that while there are thousands of tools that focus on web application and network security scanning, there are very few that do the same for databases.
    Today we are comparing the results delivered by Scuba by Imperva – a free tool and NGSSQuirreL for SQL by Next Generation Security Software – a commercial tool.

    The tools comparison table
    Here is a side-by-side comparison of functionality and results of both tools


    The results
    To provide the most impartial evaluation of the results, we have generated detailed reports from both tools as PDF files. You can review them and assess the quality yourself.

    Conclusion
    It is evident that the commercial tool beats the free Scuba in every area. But before you jump into a purchase, you need to assess your requirements and expectations.

    So it is very advisable to get the free tool, run it in your environment and understand the results, so you can see what is missing, and extend your search to a better tool.

    Talkback and comments are most welcome

    Related posts
    Thrown in the Fire – Database Corruption Investigation
    Quick and Basic Security Assessment for Databases
    SQL Server Bulk Import – BCP HOW TO

  • IP Spoofing Attack in the real world

    The guest post on IP Spoofing was well visited and caused a lot of interest. One may expect that a lot of visitors actually thought that IP spoofing is a great way to cause a bit of commotion and try their hand at hacking.

    The reality of the internet is actually quite different. First of all, IP spoofing has been around for decades, and has been the cause of a lot of quite nasty attacks on high-profile targets.


    Most serious ISPs do not want to be associated with IP spoofing attacks, and are implementing measures to contain IP spoofing attacks originating from their networks.

    The containment measures are implemented on their firewalls and routers. The basic logic of this protection is this:

    • A firewall is aware of the networks to which it connects, so it can control source addresses. For example, a demo firewall has 5 interfaces:
      • A connecting to network 10.1.1.x
      • B connecting to network 10.2.1.x
      • C connecting to network 10.3.1.x
      • D connecting to network 10.4.1.x
      • ‘outside’ connecting to the rest of the world/internet

    It is expected that any traffic coming in on interface A will have a source address of 10.1.1.x. If it doesn’t, it is most probably an IP spoofing attack and will be dropped. The only interface that cannot apply such logic is the ‘outside’ interface, since it connects the firewall to the rest of the internet. But the outside interface can have another protection, which guards against ‘loop’ IP spoofing attacks: the ‘outside’ interface must not accept incoming packets with source addresses from a network that is on any of the ‘inside’ interfaces.
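    The per-interface check described above can be sketched in a few lines, using the demo firewall’s interfaces. This is a simplified model of the logic, not a real firewall configuration:

    ```python
    # Sketch of the anti-spoofing source check: an inside interface must see
    # sources from its own network; 'outside' must never see an inside source
    # (the 'loop' check). Networks match the demo firewall described above.
    import ipaddress

    INSIDE = {
        "A": ipaddress.ip_network("10.1.1.0/24"),
        "B": ipaddress.ip_network("10.2.1.0/24"),
        "C": ipaddress.ip_network("10.3.1.0/24"),
        "D": ipaddress.ip_network("10.4.1.0/24"),
    }

    def is_spoofed(interface, src):
        """True if a packet with source src on this interface looks spoofed."""
        src = ipaddress.ip_address(src)
        if interface == "outside":
            # loop protection: outside traffic must not claim an inside source
            return any(src in net for net in INSIDE.values())
        return src not in INSIDE[interface]

    print(is_spoofed("A", "10.1.1.7"))        # legitimate -> False
    print(is_spoofed("A", "192.0.2.1"))       # spoofed -> True
    print(is_spoofed("outside", "10.3.1.9"))  # loop attack -> True
    ```

    Real firewalls implement this with interface ACLs, but the decision logic is exactly this membership test.
    
    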

    • Routers have a bit more complex mechanism, since a router can have traffic from multiple networks arriving on any of its interfaces. They use uRPF (unicast Reverse Path Forwarding), which analyzes whether the packet’s source address comes from a network that is known in the routing domain of the router.
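    In its strict form, uRPF also checks that the packet arrived on the same interface the router would use to reach the source address. A simplified sketch, with a made-up routing table:

    ```python
    # Simplified strict-uRPF sketch: a packet passes only if the routing
    # table's best match for its source points back out the arrival
    # interface. The routing table below is a made-up example.
    import ipaddress

    ROUTES = {
        ipaddress.ip_network("10.1.0.0/16"): "eth0",
        ipaddress.ip_network("10.2.0.0/16"): "eth1",
        ipaddress.ip_network("0.0.0.0/0"): "eth2",   # default route
    }

    def urpf_pass(src, arriving_interface):
        """True if the reverse path for src matches the arrival interface."""
        src = ipaddress.ip_address(src)
        # longest-prefix match over the routing table
        matches = [net for net in ROUTES if src in net]
        best = max(matches, key=lambda net: net.prefixlen)
        return ROUTES[best] == arriving_interface

    print(urpf_pass("10.1.4.4", "eth0"))   # source routed via eth0 -> True
    print(urpf_pass("10.1.4.4", "eth1"))   # reverse path mismatch -> False
    ```

    Looser uRPF modes only require that some route to the source exists, which is why uRPF on multi-homed networks needs care.
    
    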

    So in reality, most IP spoofing attempts will be destroyed on the ISP’s network. But these protection measures are not perfect, and there are networks which still do not control IP spoofing. An aspiring hacker can still do significant damage from networks such as:

    • University networks – apart from the large universities with dedicated IT staff, the netadmins of most universities are computer science teaching assistants. And they don’t really make much of an effort to control the traffic on the network, as long as the university’s servers and staff systems are protected. Universities are quite often Autonomous Systems, so an IP spoofing attack originating from an unprotected network will travel out onto the Internet backbone.
    • Smaller company networks – these networks are usually maintained by a ‘one man band’ sysadmin, who really has too much on his or her plate to think about spoofing protection. The silver lining in such environments is that these companies are just a small customer of an ISP, which is very capable of blocking an IP spoofing attack originating from the small company network.
    • ISPs in developing countries – much like small company networks, staffed by personnel who are not properly trained, understaffed and overworked. And the bad news is that these ISPs are also Autonomous Systems, so IP spoofing attacks originating there will most probably get out.

    Please note that this article is not an invitation to start wreaking havoc on these networks; on the contrary, it should serve as a reminder for their netadmins to implement the available and quite simple protection measures.

    Talkback and comments are most welcome

    Related posts
    Summary of IP Spoofing
    Corporate Guest WLAN – The best place for Eavesdropping to Interesting Traffic
    5 Rules to Home Wi-Fi Security
    Example – Bypassing WiFi MAC Address Restriction
    Obtaining a valid MAC address to bypass WiFi MAC Restriction

  • Protecting from the CCenter Malware and Trojan

    A very common method of distributing malware is disguising it as a useful program. The most common disguises, apart from games, are ‘malware removal programs’. This is the approach used by CCenter, a.k.a. Control Center.

    If you find a process with the name ccenter.exe running on your PC, your PC has possibly been infected with a trojan known as Infostealer.Lemir.H.
    Infostealer.Lemir.H is a Trojan horse program that attempts to steal passwords for the Legend of Mir 2 online game, but can be modified to steal other information.

    Apart from installing a trojan, CCenter intimidates people into buying the paid version of the program. Once installed, CCenter loads an imitation of a system scan every time the computer is started. It also generates large amounts of counterfeit security alerts. All these alerts are designed only to trick people into taking the program for a legitimate and reputable tool. If clicked upon, the pop-ups demand payment for using CCenter.

    CCenter has also been seen to redirect the web browser to malicious and fraudulent websites. Depending on version and programmer skill, it may also disable reputable security programs leaving the compromised machine open to future attacks.

    Here are the steps to manually remove CCenter:

    1. Use “Add or Remove Programs” to remove the installation. However, bear in mind that there may be hidden CCenter files, running processes and registry entries on your computer, so CCenter may recreate the other files after a reboot.
    2. Stop and remove CCenter processes:
      • ccagent.exe
      • ccmain.exe
      • uninstall.exe
    3. Find and delete all CCenter files found in %AppData%\CCenter\ccagent.exe
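    Steps 2 and 3 can be scripted as a dry run that prints the Windows commands instead of executing them, so you can review each one before running it by hand. The `del` path is taken from step 3 above; this is a sketch, not a vetted removal tool.

    ```python
    # Dry-run sketch: print (do not execute) the Windows commands that would
    # stop the CCenter processes and delete its files. Review before running.

    PROCESSES = ["ccagent.exe", "ccmain.exe", "uninstall.exe"]

    def removal_commands(appdata=r"%AppData%"):
        """Build the taskkill/del command lines for manual CCenter removal."""
        cmds = ["taskkill /F /IM " + proc for proc in PROCESSES]
        cmds.append('del /S /Q "' + appdata + r'\CCenter"')
        return cmds

    for cmd in removal_commands():
        print(cmd)
    ```

    Printing instead of executing keeps the script safe on an uninfected machine; once verified, paste the commands into an elevated command prompt.
    
    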

    There are other similar Malware programs in the wild. We will cover them in the following articles.

    Talkback and comments are most welcome