Vulnerability assessment tools:
the end of an era?
Vulnerability assessment tools traditionally amass a monumental number of flaws. The continuously growing number of vulnerabilities means that the tools need constant updating, and the results they produce can appear overwhelming. Not all of these flaws are of significance to security, either. Host-based patch management systems bring coherence to the chaos, and the clear advantages of using these tools call into question the value of traditional vulnerability assessment tools. Andrew Stewart describes the advantages of using patch management technologies to gather vulnerability data, and proposes a lightweight method for network vulnerability assessment which does not rely on signatures or suffer from information overload. Turn to page 7...
Contents: featured this month

NEWS
Tips to defeat DDoS (page 2)
Qualys ticks compliance box (page 2)
Russian hackers are world class (page 3)

FEATURES
De-perimeterisation
Inside out security: de-perimeterisation (page 4)
Vulnerabilities
A contemporary approach to network vulnerability assessment (page 7)
Cryptography
Crypto race for mathematical infinity (page 10)
Biometrics
Biometrics: the eye of the storm (page 11)
Proactive security
Proactive security: vendors wire the cage but has the budgie flown... (page 14)
PKI
Managing aspects of secure messaging between organizations (page 16)
RFID
RFID: Misunderstood or untrustworthy? (page 17)
Snort
Network Security Manager’s preferences for the Snort IDS and GUI add-ons (page 19)

REGULAR
News in brief (page 3)
April 2005 ISSN 1353-4858
ISSN 1353-4858/05 © 2005 Elsevier Ltd. All rights reserved
This journal and the individual contributions contained in it are protected under copyright by Elsevier Ltd, and the following terms and conditions apply to their use:
Photocopying
Single photocopies of single articles may be made for personal use as allowed by national copyright laws. Permission of the publisher and payment of a fee is required for all other photocopying, including multiple or
systematic copying, copying for advertising or promotional purposes, resale, and all forms of document delivery. Special rates are available for educational institutions that wish to make photocopies for non-profit
educational classroom use.
Tips to defeat DDoS
From the coal face of Bluesquare
Online gambling site Bluesquare has survived brutal distributed denial-of-service attacks, and its CTO, Peter Pederson, presented his survival checklist at a recent London event.

Pederson held his ground by refusing to pay DDoS extortionists who took Bluesquare's website down many times last year. He worked with the National Hi-Tech Crime Unit to combat the attacks and praised the force for its support.

Speaking at the E-crime congress, Pederson played a recording of the chilling voice of an extortionist who phoned the company switchboard demanding money.

After experiencing traffic at 300 Megabits per second, Pederson said he finds it amusing when vendors phone him with sales pitches boasting that they can stop weaker attacks. He has seen it all before. Story continued on page 2...
RFID – misunderstood or untrustworthy?
The biggest concern with RFID is the ability to track the location of a per-
son or asset. Some specialized equipment can already pick up a signal
from an RFID tag over a considerable distance.
But an RFID tag number is incomprehensible to a potential attacker without access to a backend database. The problem is that an attacker may get access to such a database. Bruce Potter examines whether RFID really is a sinister security nightmare. Turn to page 17...
Editorial office:
Elsevier Advanced Technology
PO Box 150
Kidlington, Oxford
OX5 1AS, United Kingdom
Tel:+44 (0)1865 843645
Fax: +44 (0)1865 853971
E-mail: s.hilley@elsevier.com
Website: www.compseconline.com
Editor: Sarah Hilley
Supporting Editor: Ian Grant
Senior Editor: Sarah Gordon
International Editorial Advisory Board:
Dario Forte, Edward Amoroso, AT&T Bell Laboratories; Fred
Cohen, Fred Cohen & Associates; Jon David, The Fortress;
Bill Hancock, Exodus Communications; Ken Lindup,
Consultant at Cylink; Dennis Longley, Queensland
University of Technology; Tim Myers, Novell; Tom Mulhall;
Padget Petterson, Martin Marietta; Eugene Schultz,
University of California, Berkeley Lab; Eugene Spafford,
Purdue University; Winn Schwartau, Inter.Pact
Production/Design Controller:
Esther Ibbotson
Permissions may be sought directly from Elsevier Global
Rights Department, PO Box 800, Oxford OX5 1DX, UK;
phone: (+44) 1865 843830, fax: (+44) 1865 853333, e-mail:
permissions@elsevier.com. You may also contact Global
Rights directly through Elsevier’s home page (http://
www.elsevier.com), selecting first ‘Support & contact’, then
‘Copyright & permission’.
In the USA, users may clear permissions and make
payments through the Copyright Clearance Center, Inc., 222
Rosewood Drive, Danvers, MA 01923, USA; phone: (+1)
(978) 7508400, fax: (+1) (978) 7504744, and in the UK
through the Copyright Licensing Agency Rapid Clearance
Service (CLARCS), 90 Tottenham Court Road, London W1P
0LP, UK; phone: (+44) (0) 20 7631 5555; fax: (+44) (0) 20
7631 5500. Other countries may have a local reprographic
rights agency for payments.
Derivative Works
Subscribers may reproduce tables of contents or prepare
lists of articles including abstracts for internal circulation
within their institutions.
Permission of the Publisher is required for resale or distrib-
ution outside the institution.
Permission of the Publisher is required for all other deriva-
tive works, including compilations and translations.
Electronic Storage or Usage
Permission of the Publisher is required to store or use elec-
tronically any material contained in this journal, including
any article or part of an article.
Except as outlined above, no part of this publication may be
reproduced, stored in a retrieval system or transmitted in
any form or by any means, electronic, mechanical, photo-
copying, recording or otherwise, without prior written per-
mission of the Publisher.
Address permissions requests to: Elsevier Science Global
Rights Department, at the mail, fax and e-mail addresses
noted above.
Notice
No responsibility is assumed by the Publisher for any injury
and/or damage to persons or property as a matter of prod-
ucts liability, negligence or otherwise, or from any use or
operation of any methods, products, instructions or ideas
contained in the material herein. Because of rapid advan-
ces in the medical sciences, in particular, independent veri-
fication of diagnoses and drug dosages should be made.
Although all advertising material is expected to conform
to ethical (medical) standards, inclusion in this publication
does not constitute a guarantee or endorsement of the
quality or value of such product or of the claims made of
it by its manufacturer.
02158
Printed by Mayfield Press (Oxford) Limited
NEWS

Qualys ticks compliance box

Brian McKenna

Vulnerability management vendor Qualys has added new policy compliance features to its QualysGuard product. This allows security managers to audit and enforce internal and external policies on a 'software as a service' model, the company says.
In a related development, the company is trumpeting MasterCard endorsement for the new feature set.

Andreas Wuchner-Bruehl, head of global IT security at Novartis, commented in a statement: "Regulations such as the Sarbanes-Oxley Act and Basel II [mean that] much of the burden now falls on IT professionals to assure the privacy and accuracy of company data. In this environment, security managers must tie their vulnerability management and security auditing practices to broader corporate risk and compliance initiatives."

Philippe Courtot, chief executive officer of Qualys, said: "Security is moving more and more to policy compliance. For example: are your digital certificates up to date? We offer quick deployability since we are not selling enterprise software, but providing it as a service. Customers don't have software to deploy, and Qualys scans on a continuous basis."
"In 2004 Sarbox was all about keep-
ing C-level executives out of jail, but we
are moving beyond that now. The
opportunity is to streamline the best
practices generated out of Sarbox con-
sulting as it relates to the security in
your network".
The latest version of QualysGuard has
been endorsed by MasterCard. The vul-
nerability management vendor has com-
pleted the MasterCard Site Data
Protection (SDP) compliance testing
process.
From 30 June, this year, MasterCard
will require online merchants
processing over $125,000 in monthly
MasterCard gross volume to
perform an annual self-assessment and
quarterly network scan.
"The payment card industry's security
requirements (PCI, SDP, Visa CISP)
apply to all merchants with an Internet
facing IP, not just those doing E-com-
merce, so the magnitude of retailers this
program affects is significant," said
Avivah Litan, vice president and research
director at Gartner.
Qualys says it achieved compliance status by proving its ability to detect, identify and report vulnerabilities common to flawed website architectures and configurations. These vulnerabilities, if not patched on actual merchant websites, could potentially lead to an unauthorized intrusion.
"The payment card industry's security
standards are converging, which will
simplify the compliance process, but
achieving compliance with these stan-
dards can still be very costly for both
merchants and acquiring banks. The
more the process can be streamlined and
automated, the easier it will be for every-
one," said Litan.
Tips to defeat DDoS
(continued from page 1)
The DDoS Forum was formed in
response to the extortionist threat to
online gambling sites. Pederson is
adamant about not paying up.
Peter Pederson’s survival checklist
against DDoS attacks:
• Perform ingress and egress
filtering.
• Consolidate logs.
• Perform application level checks.
• Implement IDS.
• Implement IPS.
• Check if third-party connections are open.
• Capture current network traffic (see the sketch after this list).
• Monitor current system states.
• Maintain current patches.
• Put procedures and policies in place to handle DDoS attacks.
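By way of illustration of the traffic-capture and monitoring items above, the minimal Python sketch below counts requests per source address over a sliding window and flags sources that exceed a threshold. The window, threshold and event feed are invented for illustration; a real deployment would feed it from whatever capture or log-consolidation tooling is already in place.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10   # sliding window length (assumed value)
THRESHOLD = 500       # requests per window that triggers an alert (assumed value)

def flag_noisy_sources(events):
    """Consume (timestamp, src_ip) pairs and yield sources that exceed THRESHOLD
    requests within the sliding window."""
    recent = defaultdict(deque)   # src_ip -> timestamps seen in the window
    for ts, src in events:
        q = recent[src]
        q.append(ts)
        while q and ts - q[0] > WINDOW_SECONDS:
            q.popleft()           # drop timestamps that have aged out
        if len(q) > THRESHOLD:
            yield src, len(q)

if __name__ == "__main__":
    # Hypothetical feed: 600 requests from one source within a few seconds.
    now = time.time()
    sample = [(now + i * 0.01, "198.51.100.7") for i in range(600)]
    for src, count in flag_noisy_sources(sample):
        print("ALERT: %s sent %d requests in the last %ds" % (src, count, WINDOW_SECONDS))
        break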
In brief
Microsoft talks up security
After 25 years of complaints about the
poor security of its products, Microsoft
has published a 19-page booklet, The
Trustworthy Computing Security Develop-
ment Lifecycle, that outlines the "cradle to
grave" procedures for a mandatory "Security
Development Lifecycle" for all its Internet-
facing products.
The new process "significantly reduces" the
number and lethality of security vulnerabili-
ties, it says. The new approach comes from
Bill Gates and Steve Ballmer, Microsoft's
chairman and chief executive. So far software
produced using the SDL framework includes
Windows Server 2003, SQL Server 2000
Service Pack 3 and Exchange 2000 Server
Service Pack 3.
Windows Server gets extra protection
Windows Server 2003's new Service Pack
1 allows Windows servers to turn on their
firewalls as soon as they're deployed, and
to block inbound Internet traffic until
Windows downloads Microsoft's latest securi-
ty patches.
A new security configuration wizard detects
a server's role as a file server, Web server, or
database host, for example, and then disables
the software and ports not associated with that
role. It also makes DCOM, Microsoft's tech-
nology for distributed objects, less prone to
attack, the firm says.
VoIP vulnerabilities addressed
Security worries are holding up adoption
of VoIP. Even so, research from In-Stat/
MDR suggests penetration will reach 34%
among mid-sized businesses, and 43% in large
enterprises.
To increase adoption rates, the new Voice
over IP Security Alliance (VOIPSA) has creat-
ed a committee to define security standards for
Internet telephony networks.
In large networks, the bandwidth and time spent routing traffic and filtering spam create a latency problem for VoIP traffic passing through the firewall; that is one of the issues on the committee's agenda. Other topics include security technology components, architecture and network design, network management, end-point access and authentication, infrastructure weaknesses, vulnerabilities and emerging application attacks.
Warp speed, Mr Plod
The British government has set up six Warps
(warning advice and reporting points) to allow
businesses to share confidential information
about risks, security breaches and successful
countermeasures, and to receive tailored secu-
rity alerts.
The government also promised a Warp to
show home computer users how to improve
PC security and lower the risk of them
becoming staging posts for hackers attacking
businesses. The US and Holland are consider-
ing creating similar programmes, says the
National Infrastructure Security Co-ordina-
tion Centre (NISCC), which is co-ordinating
the scheme.
Don't trust hardware
Hardware devices are as insecure as any IT sys-
tem, Joe Grand, CEO of Grand Idea told del-
egates at the Amsterdam Black Hat confer-
ence. Attacks include eavesdropping, disrupt-
ing a hardware security product, using undoc-
umented features and invasive tampering.
Network appliances, mobile devices, RFID
tokens and access control devices are all poten-
tially at risk. The storage of biometric charac-
teristics on back-end systems also sets up
avenues of attack, and physical characteristics
are often easily stolen or reproduced.
Researchers recently showed how to exploit
cryptographic weaknesses to attack RFID tags
used in vehicle immobilisers and the Mobil
SpeedPass payment system. SSL cryptographic
accelerators are also potentially hackable, as
demonstrated by a recently documented attack
against Intel's NetStructure 7110 devices.
Wireless Access Points based on Vlinux, such
as the Dell TrueMobile 1184, can also be
hacked.
Security through obscurity is still widely practiced in hardware design, but hiding something does not solve the problem, Black Hat delegates were told.
IM creates instant havoc
Security threats from Instant Messages have
increased 250% this year, according to a
report from IMlogic Threat Center. The
research tracks viruses, worms, spam and
phishing attacks sent over public IM net-
works. It found reported incidents of new IM
threats grew 271% so far. More than half the
incidents happened at work via free IM ser-
vices such as AOL Instant Messenger, MSN
Messenger, Windows Messenger, and Yahoo
Messenger.
Israel jails colonel for losing PC
The Israeli army jailed the commander of an
elite Israel Defense Forces unit for two weeks
for losing a laptop computer containing clas-
sified military information. The laptop
should have been locked away, but was appar-
ently stolen while he was on a field trip with
his soldiers.
NEWS
Russian hackers are world class

Brian McKenna

Russian hackers are “the best in the world”, Lt. General Boris Miroshnikov told the eCrimes Congress in London on 5 April. “I will tell them of your applause”, he told the clapping audience at the start of a speech reporting on cyber crime developments in the region.

Miroshnikov is head of Department K, established within Russian law enforcement in 1998 to deal with computer crime. His department has worked closely with the UK's National Hi-Tech Crime Unit.
Countries like Russia that came late to the internet, he said, exhibit its problems more dramatically. From 2001 to 2003, computer crime in Russia doubled year on year, he confirmed. “Only in 2004 did we hold back the growth”.

“It used to be naughty boys who committed these crimes”, he said, “but now they have grown up”. Tackling the problem now needs the co-operation of telecoms companies, ISPs, the legal profession, and law enforcement, he said.
Alan Jebson, group COO at HSBC Holdings, echoed the Russian’s rueful
‘boast’. "We are up against the best”,
he said at the same event. “Some of
these Russian hackers have day jobs
designing highly secure encryption
technologies”.
"We must have comparable laws and
sanctions. We need to agree what is a
computer crime”.
Miroshnikov reported that when Department K was in its infancy “80% of computer crime was out of sight. We are now getting better because the victims know who to come to and we have had no leaks of victim identity”.
He concluded that there is a strong
need in Russia for state standards that
will keep out the “charlatans of comput-
er security”.
DEPERIMETERISATION

Inside out security: de-perimeterisation

Ray Stanton, global head of BT security practice

Gone are the days of fortress security
If you’re into IT security, it’s pretty hard to avoid discussions about de-perimeterisation: the loosening of controls at the boundary in favour of pervasive security throughout the network, systems and applications. The idea’s not new, but it’s certainly a hot topic right now, led by some formidable CSOs in major blue-chips who have come together to create the Jericho Forum to promote the idea. Everybody
seems to be talking about it – and while
there are senior IT managers and securi-
ty experts who are fully and publicly
embracing the idea, there are also those
who are feeling more than a little appre-
hensive about this talk of breaking down
the barriers at the edge of the network.
After all, it’s just not safe out there – and
we’ve all seen the statistics to prove it.
But opening up the networks provides
us with opportunities as well as threats.
It’s time to stop looking at security from
the outside, and focus instead on looking
at security from the inside out.
Manning the battlements
The fact that de-perimeterisation is caus-
ing some worried muttering within the
security community is not that surpris-
ing. For years we have been working
towards attaining the goal of a network
boundary that is 100 percent secure.
Security managers have tended to adopt
a siege mentality, and softer boundaries
appear to be contrary to everything that
we are working for.
But we need to stop thinking of our
network as a medieval citadel under
attack. After all, those fortresses, with
their thick, high stone walls, were excel-
lent at deflecting an enemy for a fixed
period of time. But once that enemy got
inside the walls, the fight was over with-
in a matter of hours. The same is true of
most IT networks. Once the hard outer
shell has been penetrated, it is fairly
straightforward to run rampage through
IT systems and cause untold amounts of
havoc.
And of course, barricading yourself
behind high walls doesn’t let the good
guys in, doesn’t stop internal attacks
from rebellious subjects, and isn’t exactly
flexible. But flexibility is what the mod-
ern business is all about. Firms need to
expand. They want their salespeople to
remain connected through their mobile
devices and remote access. They want to
collaborate easily with partners and inte-
grate business processes with customers
and suppliers. Unlike fixed stone walls,
the boundaries of the modern business
are shifting all the time.
Seizing opportunities
This is not the time for security experts
to revert to their negative, jackbooted
stereotype. The ‘trespassers will be prose-
cuted’ signs – along with the negative
expressions and shaking heads – need to
be abandoned. Although we all like to
think of ourselves as knights in shining
armour, rescuing our organizations from
marauding outsiders, it’s time to update
this self-image. The fact is we need to
be modern, twenty-first century
intelligence agents, not twelfth century
warriors.
Instead we should see these new devel-
opments as an opportunity. Let’s face it,
100% security of the network boundary
has always been an almost impossible
task. As Gene Spafford, Director,
Computer Operations, Audit, and
Security Technology at Purdue
University put it: “The only system
which is truly secure is one which is
switched off and unplugged, locked in a
titanium lined safe, buried in a concrete
bunker, and is surrounded by nerve gas
and very highly paid armed guards. Even
then, I wouldn’t stake my life on it…”
Nor would you be able to use it.
Added to that of course, is the fact
that boundaries keep moving: new
devices, new locations, additional busi-
ness partners, illicit downloads and the
latest applications, all add to the ever-
expanding perimeter, making it increas-
ingly difficult to define, never mind
secure. And then there’s the weakest link
of all: the people. Employees, being
human, insist on making basic mistakes
and leaving their passwords lying around
or opening dubious attachments.
De-perimeterisation can, therefore, be
seen as a chance to stop going after the
impossible, and to focus effort on
achieving acceptable levels of risk. No
more tilting at windmills. No more run-
ning to stand still.
More than that, this is a real opportu-
nity to align security with overall organi-
sational strategy, and to prove the value
that it adds to the organisation. To do
that, we need to understand where the
call for opening up the networks is com-
ing from.
Harnessing the drivers
De-perimeterisation is driven by several
business needs. Firstly, the desire for the
‘Martini principle’ - anytime, anyplace,
anywhere computing. Mobile and flexi-
ble working have become a normal part
of the corporate environment. This is
happening by default in many organisa-
tions, which now wish to take control and
effectively manage the multitude of ven-
dors, applications, devices and docu-
ments that are springing up throughout
the company.
The second driver is cost. Accessing
applications through a broadband
enabled device, using XML or Web ser-
vices, reduces the costs associated with
connectivity and maintenance of leased
lines, private exchanges and even VPNs.
At the same time it increases availability,
through the ‘always on’ connection, and
so flexibility.
Finally, there is a need for approved
third parties to gain access. In the digi-
tal networked economy, collaborative
working models with partners, joint ven-
tures, outsourcers or suppliers require
secure access to data in real time – which
cannot be achieved with a tough impen-
etrable network boundary.
If we look at the oil and gas indus-
tries, which have been early adopters of
de-perimeterisation – or ‘radical exter-
nalisation’ as it is known in BP – we
can see clear examples of all of these
drivers. Significant numbers of workers
are on the road or in remote locations
at any given time. Companies tend to
make a great deal of use of outsourcers
and contractors, and undertake joint
ventures with other firms who are part-
ners in one region but competitors in
another. As a result they have long
recognised the need to let partners
have access to one part of the system,
while keeping the doors firmly barred
on others.
In fact around 10% of BP’s staff now
access the company’s business applica-
tions through the public Internet, rather
than through a secure VPN. This is the
first step in a move towards simplifica-
tion of the network and enabling access
for up to 90,000 of the oil company’s
third party businesses.
This picture of a flexible, cost effec-
tive, and adaptable business is, not sur-
prisingly, very attractive. And not just to
those in hydrocarbons. But efforts to
achieve it can be hampered by current
security thinking. As experts, we need to
reverse this, and be seen as an enabler
once more. Our responsibility is to
make sure that everyone is aware of the
risks and can make informed decisions.
After that, it’s about putting adequate
controls in place. This shift in thinking
offers us a real possibility that security,
indeed IT as a whole, can be brought in
from the cold and get a much-needed
voice at board level.
Back to basics
But before we tear down the firewalls
and abandon ourselves to every virus
infestation out there, let’s take a look at
what ‘inside out’ security really involves.
De-perimeterisation is actually some-
thing of a misnomer. It’s not about get-
ting rid of boundaries altogether. Rather
it’s a question of re-aligning and refocus-
ing them. So instead of a single hard
shell round a soft centre, an organisation
has a more granular approach with inter-
nal partitions and boundaries protecting
core functions and processes – hence the
inside out approach. Typically the hard
controls around the DMZ (demilitarised
zone) will move to sit between the red
and amber areas, rather than the amber
and green.
This takes us back to some basic prin-
ciples of security management: deciding
what bits of your systems and accompa-
nying business processes are key and
focusing on their security. Rather than
taking a ‘one size fits all’ approach,
inside out security requires us to look at
protecting our information assets from
the perspective of what needs to be
secured and at what level.
The decision should be based upon
another fundamental tenet of good secu-
rity practice: thorough assessment of
risk. That customer database from three
years ago may be of limited value now,
but if the contents are leaked, the conse-
quences could be disastrous.
Although policy control and manage-
ment has always been a fundamental fac-
tor in any security measures, it will take
a far more central role than it has
enjoyed so far. Federated security, granular access and rotating users all
demand close control. Updates to policy
that reflect both changes within the
organisation and to its immediate envi-
ronment, will be required on a more reg-
ular basis than ever before.
We also need to make sure that we still
get the basics right. For example, viruses
are not going to go away: there will
always be new variants and new vulnera-
bilities. The 2004 edition of the DTI
information breaches survey shows that a
massive 74% of all companies suffered a
security incident in the previous year,
and 63% had a serious incident. Viruses
still accounted for 70% of these, which
seems to indicate that despite their
prevalence, there is still a lack of maturi-
ty in incident management procedures.
Firewall vendors don’t need to panic
just yet – there is still going to be a need
for their products in a de-perimeterised system. The difference is that these will no longer sit at the very edge of the network,
but will be strategically placed inside it,
at device, data or even application level.
Some of the companies that are breaking down the barriers as members of the Jericho Forum:
• Boeing
• British Broadcasting Corporation
• Deutsche Bank
• Lockheed Martin
• Pfizer
• Reuters
• Unilever

Identity management

While firewalls may sort the ‘good’ HTTP traffic from the bad, they cannot
discern the difference between authorized
and unauthorized traffic. You also need
to identify what and who you trust from
both internal and external sources: which
of your own people should have access to
what systems and processes, and where
you are going to allow partners, cus-
tomers and the public to go. That means
that user authentication and identity management are going to play an increasingly important role – with two-factor authentication being the bare minimum.
Access policies will become more pre-
cise, based on a ‘least privilege’ model, to
ensure that only the parts of the system
required for the job will be available.
Like all policies this will need to be
monitored and updated to match
employees moving through the organisa-
tion, and to keep up with changing rela-
tionships with partners.
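As a rough sketch of the ‘least privilege’ model described above, the snippet below grants a user only the resources mapped to their current role and denies everything else by default. The roles and mappings are hypothetical; in practice they would come from the organisation's identity management system.

# Hypothetical role-to-resource mapping; a real deployment would pull this
# from the identity management system rather than hard-coding it.
ROLE_PERMISSIONS = {
    "sales":   {"crm", "email"},
    "finance": {"ledger", "email"},
    "partner": {"order-portal"},
}

def is_allowed(role, resource):
    """Least privilege: deny unless the role explicitly includes the resource."""
    return resource in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("partner", "order-portal")
assert not is_allowed("partner", "ledger")   # partners get nothing beyond their portal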
Identity management will ensure that
no unauthorized personnel have access
to any part of the system, and will be a
major factor in maintaining compliance.
With a more open network, organisa-
tions will still have to prove that
confidential data on personnel or finan-
cial management has not been subject to
unauthorized access. With the Data
Protection Act, human rights legislation,
Sarbanes-Oxley, European accounting
standards and a dozen other rules and
regulations to navigate, providing accu-
rate audit trails of who has accessed, or
attempted to access, critical data will
remain a basic legal requirement.
You can never be too thin
It almost goes without saying that iden-
tity management is much easier when
the identities belong to an organiza-
tion’s own employees. Enforcing policy
at a partner organization is that much
harder.
And, given that it is hard enough to
ensure that your own users have config-
ured their devices properly, it seems
unlikely that any of us will be able to
guarantee that partners have done so.
But this is crucial, since ill-configured
laptops and PDAs represent a significant
security risk at both the outer edge and
in the core of the network.
It seems that inside out security will
act as an impetus towards a more thin-
client based architecture. Centralised systems are easier to secure than documents, applications, data and network connections spread over different gadgets and different locations. A centralised approach also eliminates the problems associated with accessing the network with inappropriate devices.
In one company that has already adopted de-perimeterisation, employees are responsible for their own laptops, including
the latest patches and anti-virus protection.
But the laptops are thin clients, which
means that IT staff can focus on the secu-
rity of the central server and information
on it, rather than trying to secure an unde-
fined group of peripheral appliances.
Whether there will be a mass migra-
tion to thin client models – or even on-
demand, utility computing, which
seems to be the next logical step – is
impossible to predict. What we do
know is that the move to inside out
security, radical externalisation, de-perimeterisation or whatever other names
it acquires, will depend on architecting
the environment correctly – and main-
taining the right levels of control. A flex-
ible working model for information
security management systems that can
match the flexibility of the business as a
whole is also going to be vital.
The debates about de-perimeterisation
will doubtless continue. There is still a
lot of work to be done on standards and
interoperability of systems. But what we
can be pretty sure of is that security
experts should prepare themselves for a
fundamental change in approach.
More Information:
http://www.opengroup.org/jericho
About the author
Ray Stanton is Global Head of Security
Services at BT. He has over six years' experience in Information Services and 21 years in IT security. Ray has worked for
both government and commercial organi-
zations in a variety of security related roles
including project management, security
auditing, policy design, and the develop-
ment of security management strategies.
De-perimeterisation - the end of fortress mentality
VULNERABILITIES

A contemporary approach to network vulnerability assessment

Andrew Stewart

Modern network vulnerability assessment tools suffer from an “information overload” problem.
The roots of this problem lie in the
fact that the competitive and
commercial drivers that shaped the
early market for network vulnerability
assessment products continue to have
influence today.
These historical goals no longer reflect
the needs of modern businesses, howev-
er. A shift in requirements has occurred,
due to the now widespread use of patch
management technologies.
In this paper I describe the advan-
tages of using patch management tech-
nologies to gather vulnerability data. I
also propose a lightweight method for
network vulnerability assessment,
which does not rely on signatures, and
which does not suffer from information
overload issues.
The effect of historical
market forces
In the formative years of the commer-
cial network vulnerability assessment
market, the number of vulnerability
“checks” that vulnerability assessment
tools employed was seen as a key metric
by which competing products could be
judged. The thinking was that the
more checks that were employed by a
tool, the more comprehensive it would
be, and thus the more value its use
would provide.
Vendors were also evaluated on how
quickly they could respond to newly
publicised security vulnerabilities. The
quicker a vendor could update their
product to incorporate the checks for
new vulnerabilities, the better they
were perceived to be. In some respects
this is similar to the situation today
where software vendors are judged
by the security community on their
timeliness to release patches for security
problems that are identified in their
products.
The market's desire for a comprehen-
sive set of vulnerability checks to be
delivered in a timely fashion spurred the
manufacturers of network vulnerability
assessment tools to incorporate ever-larger numbers of checks into their prod-
ucts, and to do so with increasing rapid-
ity. Some vendors even established
research and development teams for the
purpose of finding new vulnerabilities.
(An R&D team was also an opportunity
for vendors to position and publicize
themselves within the marketplace.)
Vendors were said to have sometimes
sought competitive advantage through
duplicitous means, such as by slanting
their internal taxonomy of vulnerability
checks in order to make it appear that
they implemented more checks than they actually did.
A common practice was for vendors
to create checks for any aspect of a
host that can be remotely identified.
This was often done regardless of its
utility for security. As an example,
it is not unusual for network vulnera-
bility scanning tools to determine the
degree of predictability in the IP
identification field within network
traffic that a target host generates.
While this observation may be useful in certain circumstances, the pragmatic view must be that far more significant factors influence a host's level of vulnerability. Nonetheless, network
vulnerability assessment products
typically incorporate hundreds of such
checks, many with similarly question-
able value.
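To make the point concrete, the sketch below shows the kind of low-value check described above: it samples the IP identification field from a few probe responses and reports whether the values look sequential. It assumes the third-party Scapy library and suitable privileges, and the target address is a placeholder; it is included only to show how such a check works, not to endorse its usefulness.

# Requires the third-party Scapy library and, typically, root privileges.
from scapy.all import IP, ICMP, sr1

TARGET = "192.0.2.10"   # placeholder address, not a real host

def sample_ip_ids(target, count=5):
    """Collect the IP identification field from several echo-request replies."""
    ids = []
    for _ in range(count):
        reply = sr1(IP(dst=target) / ICMP(), timeout=2, verbose=0)
        if reply is not None:
            ids.append(reply[IP].id)
    return ids

def looks_sequential(ids):
    """Crude predictability test: do successive IDs increase by small steps?"""
    deltas = [b - a for a, b in zip(ids, ids[1:])]
    return bool(deltas) and all(0 < d <= 10 for d in deltas)

samples = sample_ip_ids(TARGET)
print(samples, "predictable" if looks_sequential(samples) else "not obviously predictable")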
Information overload
The result of these competitive drivers
has been that when a network
vulnerability scanner is run against
any network of reasonable size, the
printout of the report is likely to
resemble the thickness of a telephone
directory. An aggressive approach to
information gathering coupled with
an ever increasing set of vulnerabilities
results in an enormous amount of
information that can be reported.
Such a large amount of data is not only intimidating, but it severely limits the ability to draw key insights about the security of the network. The
question of “where to begin?” is a
difficult one to answer when you are
told that your network has 10,000
“vulnerabilities”.
Vendors of network vulnerability
assessment products have tried to
address this information overload prob-
lem in several ways. One approach has
been to attempt to correlate the output
of other systems (such as intrusion
detection systems) together with vul-
nerability data to allow results to be
prioritised. Another approach has been
to try and “fuse” data together on the
basis of connectedness, in order to
increase the quality of data at a higher
layer. These approaches have spawned
new categories of security product, such
as “Enterprise Security Management”
(ESM), “Security Information
Management” (SIM), and
“Vulnerability Management”.
But rather than add layers of abstrac-
tion (and products to buy), the solution
would logically lie in not gathering so
much data in the first place. This has
now become a viable strategy, because
of the capabilities provided by modern
patch management technologies.
The rise of patch
management
The widely felt impact of Internet
worms has opened the eyes of businesses
to the importance of patching systems.
Host-based patch management products
such as Microsoft's SMS (Systems
Management Server) and SUS (Software
Update Services) are now in wide
deployment, as are other commercial
and freeware tools on a variety of plat-
forms. See for example, PM (2005) and
Chan (2004).
In many respects, this increased focus
on patch management has diminished
the traditional role of network vulnera-
bility assessment tools. If the delta
between current patch status and the
known set of vulnerabilities is already
being directly determined on each indi-
vidual host, then there is less need to use
a network vulnerability assessment tool
to attempt to collect that same informa-
tion (and to do so across the network
and en masse).
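A minimal sketch of that 'delta' idea: given the set of patches required for known vulnerabilities and the set actually installed on a host, the missing patches are simply the set difference. The bulletin identifiers below are examples only; real values would come from the patch management tool's inventory.

def missing_patches(required, installed):
    """Delta between the patches needed for known vulnerabilities and what a host has."""
    return required - installed

# Example data; a real run would pull both sets from the patch management inventory.
required_for_known_vulns = {"MS04-011", "MS04-028", "MS05-002"}
installed_on_host = {"MS04-011", "MS05-002"}

print(missing_patches(required_for_known_vulns, installed_on_host))   # {'MS04-028'}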
An advantage here is that it is a rela-
tively straightforward task for a software
agent running on a host to determine
the host’s patch level. A network vulner-
ability scanner has to attempt to remote-
ly infer that same information, and this
task is made more difficult if the vulner-
ability scanner has no credentials for the
target host.
Another advantage to using a host-
based model for gathering patch data is
that with an ever-increasing set of vul-
nerability checks being built into net-
work vulnerability assessment tools, the
probability increases that a check might
adversely affect a network service on a
box. The result might be that the scan
causes services to crash, restart, or oth-
erwise misbehave. The days when port
scanning would crash the simplistic net-
work stack within printers and other
such devices are probably behind us,
but a business might rightly question
the use of increasingly complex vulnera-
bility checks to interrogate production
systems.
With an ever-increasing number of
checks, the impact on network band-
width when a network vulnerability
assessment tool is run also climbs.
(Rate-limited and distributed scanning
can help here, but these involve addi-
tional complexity.)
There are disadvantages to employing
a host-based model, however. Products
which require that an agent be installed
on hosts have usually been seen as
time-consuming to deploy and
complex to manage. Indeed, the value
proposition of network vulnerability
assessment tools was, in part, that they
did not require a roll-out of host-based
agents. With the now widespread use
of agent-based patch management
technologies, this barrier has been
overcome.
Given the advantages of using a host-
based model to gather patch status infor-
mation, do network vulnerability assess-
ment tools still have a role to play? In
discovering new vulnerabilities, or for
discovering vulnerabilities in bespoke
applications (such as Web applications),
network vulnerability assessment tools
clearly add value. But this is something of a niche market. These are not activi-
ties that businesses typically wish to per-
form against every device within their
network environment, or on a regular
basis. (Scanning a DHCP allocated net-
work range provides little value if the
DHCP lease time is short, just as one
example.)
A modern approach
It is a widely held belief amongst securi-
ty practitioners that the majority of
security break-ins take advantage of
known vulnerabilities. While there is
no concrete evidence for this claim, on
an intuitive basis it is probably correct.
In most cases, the patch for a known
vulnerability already exists, or the ven-
dor affected is in the process of creating
the patch. (In that latter scenario, the
version numbers of the particular oper-
ating systems or applications that are
known to be vulnerable are usually
known, even if the patch itself is
not yet available.)
A patch management solution can
determine the presence or absence of
patches on hosts, and can also identify
the current version number of operating
systems and installed applications. A
patch management solution can there-
fore be used to determine vulnerability
status. The depth of reporting that
modern patch management tools pro-
vide in this area has in many respects
already surpassed the capabilities of
conventional network vulnerability
assessment tools. This is possible
because of the advantages inherent in a
host-based model.
service          count
telnet           20
ssh              79
rlogin           3
http             52
https            26
ldap             8
vnc              9
ms-term-serv     30
pcanywheredata   2
irc              1

Table 1: Display of services running on hosts
However, host-based patch manage-
ment tools only have visibility into the
hosts onto which an agent has been
installed. Organizations still need some
form of network assessment in order to
detect changes that lie outside the visibil-
ity of their patch management infra-
structure.
I suggest that this task can be accom-
plished using traditional network inter-
rogation techniques, and does not
require a library of vulnerability checks.
Well-documented techniques exist for
gathering data related to the population
of a network, the services running on
hosts within the network, and the identification of operating system types (Fyodor, 1997, 1998). These techniques
do not require a constant research effort
to develop new vulnerability checks.
A port scanner written in 1990 could
still be used today, whereas a vulnerabili-
ty scanner from the same year would
be considered woefully inadequate
because it has no knowledge of modern
vulnerabilities.
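To show how little machinery such techniques need, the sketch below is a bare-bones TCP connect scan using nothing but the Python standard library: no vulnerability signatures, just a host and a list of ports to try. The target address and port list are placeholders.

import socket

def scan(host, ports, timeout=0.5):
    """Return the ports on which a TCP connection could be established."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

print(scan("192.0.2.10", [22, 23, 80, 443, 3389, 5900]))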
The information that can be gathered
using these relatively simple techniques
has enormous utility for security.
Consider Table 1, which displays data gathered on the number of different services running on hosts within a network.
The policy on this network is to use
Microsoft's Terminal Services for remote
administration, and therefore the two
installations of pcAnywhere and the nine
installations of VNC that were detected
are policy violations that need to be
investigated then corrected. Running
pcAnywhere or VNC is not a security
“vulnerability” per se, but remote admin-
istration software certainly has a security
implication. That is the difference
between looking for specific vulnerabili-
ties and gathering general data on the
network.
As a further example, the IRC server
that was found on the network would
probably raise the eyebrow of most secu-
rity practitioners.
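A rough sketch of this style of analysis: compare the observed service inventory (the counts from Table 1) against an allow-list derived from policy, and flag anything unexpected. The allow-list below is an assumption based on the policy described in the text.

# Observed inventory, as in Table 1 (service -> number of hosts running it).
observed = {
    "telnet": 20, "ssh": 79, "rlogin": 3, "http": 52, "https": 26,
    "ldap": 8, "vnc": 9, "ms-term-serv": 30, "pcanywheredata": 2, "irc": 1,
}

# Assumed policy: Terminal Services is the only sanctioned remote administration
# tool, and no IRC servers are expected on this network.
allowed = {"telnet", "ssh", "rlogin", "http", "https", "ldap", "ms-term-serv"}

violations = {svc: count for svc, count in observed.items() if svc not in allowed}
print(violations)   # {'vnc': 9, 'pcanywheredata': 2, 'irc': 1}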
Note how simple it is to perform this
analysis, in contrast to having to wade through hundreds of pages of a vulnerability assessment report. If a patch man-
agement solution is being used to detect
weaknesses in the patch status of hosts,
then this is the type of data that it is
valuable to collect across the network.
This is not traditional vulnerability
assessment data, but rather foundational
data about the network.
Table 2 shows the number of operating system types found within a particular network. Again, this data was collected using simple network information gathering techniques.

os               count
HP embedded      26
Cisco embedded   33
Linux            42
Windows          553
OpenBSD          1
No match         2

Table 2: Number of operating systems found in a particular network
This network employs both Linux and
Windows machines as its corporate stan-
dard. We can therefore say that the
detection of a device running OpenBSD
warrants investigation. Similarly, it
would be valuable from a security per-
spective to investigate the two devices for
which there was no fingerprint match.
An all-Linux organization might worry
about the presence of a Windows 95
machine on its network (and vice-versa,
of course).
This approach is well-suited for
detecting the decay in security that
computers tend to suffer over time.
Most businesses employ a standard
build for desktop and server machines
to reduce complexity and increase ease
of management, but day-to-day admin-
istrative activities can negatively impact
that base level of security. Temporary
administrative accounts are created but
then forgotten; services such as file
transfer are added for ad hoc purposes
but not removed, and so on. A vulner-
ability scanner is overkill for detecting
this kind of “policy drift”. By employing simpler network information gathering techniques, the run time of a scan can be reduced, as can the impact on network bandwidth. The duration of the information gathering loop is shortened, and results are provided more quickly,
which itself reduces risk by allowing
remediation activities to be carried
out sooner.
Conclusions
Patch management technologies and
processes now deliver to businesses
the core capability of traditional net-
work vulnerability assessment
tools; namely, the identification of vul-
nerabilities that are present due to miss-
ing patches. Patch management
solutions can be used to accomplish
this task by identifying the delta
between the set of patches for
known vulnerabilities and the current
patch status of hosts within the
environment.
For network-wide vulnerability assess-
ment, the question that businesses need
to ask is: what data is it still valuable to
gather across the network? There is
little value in employing a noisy, band-
width-consuming network vulnerability
scan to interrogate production
systems with an ever-increasing
number of vulnerability checks, when
patch status data is already being
collected through patch management
activities.
Employing simple network
information gathering techniques in
this supplementary role is easier, takes
less time, has less impact on network
bandwidth, does not require a
constantly updated set of vulnerability
“checks”, and provides more intuitive
results.
About the author
Andrew Stewart is a Senior Consultant with
a professional services firm based in Atlanta,
Georgia.
References

Chan (2004). "Essentials of Patch Management Policy and Practice". Available:
Fyodor (1997). "The Art of Port Scanning". Phrack Magazine, Vol. 7, No. 51, 1 September 1997.
Fyodor (1998). "Remote OS detection via TCP/IP Stack FingerPrinting". Phrack Magazine, Vol. 9, No. 54, 25 December 1998.
PM (2005). Mailing list archive at http://www.patchmanagement.org
CRYPTOGRAPHY

Crypto race for mathematical infinity

Sarah Hilley

A newly emergent country has begun to set the pace for cryptographic mathematicians…

Chinese infosec research efforts are fixated on cryptography, and researchers are already producing breakthroughs. A
group of researchers from Shandong
University in China stunned the estab-
lished crypto community at the RSA
conference in February by breaking the SHA-1 algorithm, which is used widely in
digital signatures. This SHA algorithm
was conceived deep within the womb of
the US National Security Agency's cryp-
tography labs. It was declared safe until
2010 by the US National Institute of
Standards and Technology (NIST). But
this illusion was shattered last month.
Further proof of the hive of crypto activity in China is that 72% of all cryptography papers submitted last year to the Elsevier journal Computers & Security hailed from China and Taiwan.
And cryptography papers accounted for
one third of all the IT security research
submitted to the journal.
The Chinese are determined to get
into the subject, says Mike Walker,
head of Research & Development at
Vodafone, who studied cryptography at
Royal Holloway College, London. "If
you attract the best people from one
fifth of the world's population, you are
going to sooner or later make a big
impression." Walker would like to see
more young people venture into cryp-
tography in the UK. He believes the
general decline in interest in science
and maths is to the detriment of the
country.
But no such lack of interest is evident
in China. And the achievement in crack-
ing the SHA-1 hash function is an
earthquake of a result. "The breakage of
SHA-1 is one of the most significant
results in cryptanalysis in the past
decade," says Burt Kaliski, chief scientist
at RSA Security. "People didn't think
this was possible."
Shelf-life
"Now there is no doubt that we need a
new hash function," says Mette
Vesterager, chief executive officer at
Cryptico. Vesterager says a competition
will probably be launched to get a new
replacement for SHA-1. Such a competi-
tion generated the Advanced Encryption
Standard (AES), from two Belgians in
2000 to replace the Data Encryption
Standard (DES). DES was published in
1977 and had 72,000,000,000,000,000
possible key variations, making it diffi-
cult to break.
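For scale, a 56-bit DES key gives 2^56 possible keys, roughly the 72,000,000,000,000,000 variations quoted above, while the 128-bit AES key mentioned later in this article gives 2^128, about 3.4 x 10^38. The arithmetic can be checked in a couple of lines of Python:

des_keys = 2 ** 56
aes_keys = 2 ** 128

print("DES keyspace:     {:,}".format(des_keys))           # 72,057,594,037,927,936
print("AES-128 keyspace: {:.3e}".format(float(aes_keys)))   # ~3.403e+38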
NIST has now taken DES off the shelf, however. No such retirement plan has been concocted for SHA-1 yet; as yet the outcome for the broken algorithm is still undecided. But Fred Piper of Royal Holloway says that people will migrate away from it in the next year or so if the Chinese research is proven. In
addition the Chinese attack has reper-
cussions on other hash algorithms such
as MD5 and MD4.
Down to earth
The breakage of SHA-1 is not so dramatic in the humdrum application of real-life security, though. On a practical level, Kaliski rates it at a two out of 10 for impact, even though it is widely used. But cryptographers have to
think ahead in colossal numbers to keep
up with the leaps in computing power.
According to Moore's law, computers
keep getting faster at a factor of 2 every
18 months.
Cryptographers deal with theoretical
danger. They bend and stretch the
realms of mathematics and strive to cre-
ate algorithms that outlive computing
power and time. It is a race - a race
between mathematicians and computers.
Fortunately the crack of algorithms like
SHA-1 doesn't yet affect us mere mor-
tals, who unknowingly avail of crypto to
withdraw money from the ATM on a
Saturday night.
This is thanks to cryptographers
thinking in a different time, a time that
is set by the power of computation.
This power isn't here yet to make the
crack of SHA-1 realistic outside a
research environment.
As cryptography is used in one and a
half billion GSM phones in the world,
and it authenticates countless computer
users, devices, transactions, applications,
servers and so on, this is good news. It
means that we don't have to worry
about underlying algorithms being
attacked routinely like software vulnera-
bilities, for example. The dangers are
much more distant. However, side channel attacks, which target the implementation of cryptography, must be watched for, warns Kaliski. Piper recommends that keys be managed properly to guard against such loopholes in implementation.
Big computers
Governments have historically been
embroiled in mathematical gymnastics
even before cryptography became so
practical. The British famously cracked
the German Enigma code in World War
II. And American Navy cryptanalysts
managed to crack the Japanese code,
Purple, in 1940. What governments can
and can't break these days, though, is
very much unknown.
"The AES algorithm is unbreakable
with today's technology as far as I'm
aware," says Royal Holloway's Piper. So
far NIST hasn't even allocated a 'best
before' date for the decease of AES. The
AES 128 bit key length gives a total of
an astronomical 3.4 x (10^38) possible
keys. But if law enforcement can't break
keys to fight against terrorism, intelli-
gence is lost, warns Piper. However, peo-
ple wonder 'what can the NSA do?', says
Vesterager, and 'how big are their com-
puters?' But the general opinion is that
AES was not chosen because it could be
broken. Time will show, however, she
adds.
And with China pouring large
amounts of energy into studying the lan-
guage of codes and ciphers, the NSA
may want even bigger computers.
BIOMETRICS

Biometrics: the eye of the storm

By Mike Kemp, technical consultant, NGS Software

For the last few years vendors and politicians alike have touted biometrics technology as an invaluable, even preferred, approach to secure authentication of identity. However, it presents both the end users of the technology and those responsible for its implementation with a number of challenges.

Biometrics is often said to be a panacea for physical and network authentication. But there are some considerable problems with the technology, some of which can have a major impact on the security posture of the implementing organization.

At present, the cost of implementation means that relatively few companies are using biometric technologies to authenticate identities. As biometric technologies become less costly, many network administrators will find themselves having to deal with a comparatively ill-understood series of authentication technologies. Ironically, some may well expose the systems they are responsible for to an increased level of risk.

The rest of this article discusses the range of biometric technologies on the market, together with the risks and true costs of implementation that are often ignored by vendors and politicians alike.

The search begins

From the beginning, computer and network security researchers have sought an alternative to the unique identifier, which is currently the most widely used method of authenticating a user to an IT service. Typically this is a password and username combination. However, experience has shown that this mechanism consistently fails to prevent attacks, as a knowledgeable attacker can employ a range of methods to circumvent this layer of protection.

This model of authentication has been supplemented by multi-factor authentication mechanisms that are based on something the user knows (e.g.
a password), something the user has (e.g.
a token), and something the user is
(biometrics).
As has been widely discussed, although popular, password authentication is often associated with poor password policies and management strategies that don't work. Many network
administrators have wrestled with bal-
ancing password authentication and
password policies against account user
needs or demands. Too many know how
far they have had to compromise security
in order to service users.
Token of affection?
The use of token-based technologies such
as SecureID tokens, smart cards and digi-
tal certificates is becoming widely accept-
ed, not only in the workplace, but out-
side as well. Beginning in October 2003
the UK commenced a roll out of Chip
and PIN authentication methods for
transactions based on bank and credit
cards. The primary aim was to combat
the growing rate of card fraud based on
the manipulation of magnetic strips or
signature fraud. So far over 78 million Chip and PIN cards are in common use in the UK, more than one for every man, woman and child in the country.
Token-based authentication is not
without its downside, however. In fact, it is far from a panacea with regard to the security of networks, or indeed one's personal finances. A number of attack vectors exist for SecureID tokens and the like. Certainly, the number and
value of card-based frauds appears to
have risen since Chip & PIN was intro-
duced. Recent research, still ongoing, is
expected to expose a number of flaws
within the use of Chip and PIN authen-
tication mechanisms in a variety of com-
mon environments.
PIN-pushers
The push towards biometrics comes
from a variety of sources. The financial
industry in particular is resolved to
reduce fraud based on stolen identities,
which, according to Accenture, the man-
agement consultancy, now costs con-
sumers and banks $2 trillion a year.
National security agencies in various
countries, led by the US immigration
authorities, are also seeking reliable
unique authentication systems as part of
the ‘war on terror’. Biometrics is the as-
yet unfulfilled promise of the third pillar
of authentication mechanisms.
At the network level, biometrics may
well enable network administrators to
increase the security of their network
environments. However, there are a number of implementation and security issues that are often overlooked in the push towards new methods of authentication.
Methods of biometric
access
As has been outlined earlier, biometrics
is a means of authenticating an
individual's identity using a unique
personal identifier. It is a highly
sophisticated technology based on
scanning, pattern recognition and pat-
tern matching. At present it remains
one of the most costly methods of
authentication available.
Several different technologies exist
based on retinal scans, iris scans, facial
mapping (face recognition using visible
or infrared light, referred to as facial
thermography), fingerprinting (including
hand or finger geometry), handwriting
(signature recognition), and voice
(speaker recognition).
For biometrics to be effective, the
measuring characteristics must be pre-
cise, and the false positives and false neg-
atives minimised.
When a biometric authentication system
rejects an authorised individual this is
referred to as a Type 1 error; a Type 2
error occurs when the system accepts an
impostor. The effectiveness of a biometric
solution can be seen in its Crossover Error
Rate (CER). This is the percentage figure at
the point where the curve for false acceptance
rates crosses the curve for false rejection
rates. Depending upon the implementation of
the chosen biometric technology, this CER can
be high enough to make some forms unusable
for an organisation that wishes to adopt or
retain an aggressive security posture.
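To make the trade-off concrete, the short Python sketch below estimates a crossover point from two sets of hypothetical matcher scores. The score values, threshold sweep and function names are illustrative only and are not drawn from any particular biometric product.

def error_rates(genuine_scores, impostor_scores, threshold):
    # False rejection (Type 1): genuine users scoring below the threshold.
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    # False acceptance (Type 2): impostors scoring at or above the threshold.
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return far, frr

def crossover_error_rate(genuine_scores, impostor_scores, steps=1000):
    # Sweep thresholds and return the point where FAR and FRR are closest.
    lo = min(genuine_scores + impostor_scores)
    hi = max(genuine_scores + impostor_scores)
    best = None
    for i in range(steps + 1):
        t = lo + (hi - lo) * i / steps
        far, frr = error_rates(genuine_scores, impostor_scores, t)
        if best is None or abs(far - frr) < best[0]:
            best = (abs(far - frr), t, (far + frr) / 2)
    _, threshold, cer = best
    return threshold, cer

# Hypothetical match scores (higher means a stronger match).
genuine = [0.91, 0.84, 0.88, 0.79, 0.95, 0.72, 0.90]
impostor = [0.35, 0.48, 0.61, 0.29, 0.55, 0.74, 0.41]
threshold, cer = crossover_error_rate(genuine, impostor)
print(f"threshold ~ {threshold:.2f}, crossover error rate ~ {cer:.1%}")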
Space invaders
Some forms of biometrics are obviously
more invasive of one’s personal ‘space’
than others. Fingerprinting, for instance,
has negative connotations because of its
use in criminal detection. As such, some
biometrics may well meet with user resis-
tance that company security officers will
need to both understand and overcome.
In 2005, London’s Heathrow airport
introduced plans to conduct retinal scans
in a bid to increase security, and increase
the efficiency of boarding gates. At pre-
sent there are no figures on user accep-
tance of the scheme, which is currently
voluntary. However, as retinal scans are
among the most invasive of biometric
technologies, it would be surprising if the
voluntary take-up were high enough to
justify either the expense of the scheme or
the promised gains in efficiency.
Print sprint
Traditionally, biometrics has been
associated with physical security.
However, there is a growing shift
towards adopting biometrics as a mech-
anism to secure authentication across a
network. A number of fingerprint read-
ers are currently available that can be
deployed for input to the authentica-
tion system. These are now cheap and
reliable enough for IBM to include one
in some of its latest laptop computers as
the primary user authentication device.
There is also ongoing research to
reduce the cost and improve both the
accuracy and security of other biometric
methods such as facial maps and iris or
retinal scans. Judging by the develop-
ments in the field of biometrics in the last
15 years it can only be a matter of time
before everyone can afford the hardware
for biometric network authentication.
Accuracy and security?
As has already been discussed the bio-
metrics approach to network authentica-
tion has much promise; however, it is an
as yet unrealised potential. One reason is
that it is laden with a variety of short-
comings that need to be fixed prior to its
widespread adoption as an authentica-
tion mechanism.
One of the touted benefits of biomet-
rics is that biometric data is unique, and
this uniqueness makes it difficult to steal
or imitate. One often-overlooked problem
with the biometric approach is that, unlike
other forms of authentication, biometric
identifiers are anything but secret. Unlike
the traditional password-based model, or
even the token-based approach (e.g. Chip
and PIN), no biometric approach relies upon
something the user holds as a secret.
Indeed, with all the biometric technologies
currently available, potential attackers can
see exactly what is going on, and this makes
them potentially vulnerable.
Attack vectors
When evaluating biometrics network
administrators should consider possible
attack vectors. These fall into two dis-
tinct classes, namely:
• Physical spoofing, which relies on
attacks that present the biometric
sensor (of whatever type) with an
image of a legitimate user.
• Digital spoofing, which transmits
data that mimics that of a legitimate
user. This approach is similar to
the password sniffing and replay
attacks that are well known and are
incorporated in the repertoire of
many network attackers.
In 2003, two German hackers,
Starbug and Lisa, demonstrated a range
of biometric physical spoofing attacks at
the Chaos Computer Camp event. Their
attacks relied upon the adaptation of a
technique that has long been known to
many biometrics vendors. In the original
attack vector an attacker could dust a
fingerprint sensor with graphite powder,
lift the fingerprint, and then subsequent-
ly use it to gain entry.
The 2003 attack showed how an attacker
could create a 'gummy finger' using a
combination of latex, photo-imaging
software and graphite powder. Although
this method may seem somewhat far-fetched,
it can be used to bypass a number of
available fingerprint biometric devices.
Indeed, in 2002, Japanese researcher
Tsutomu Matsumoto was able to fool 11
biometric fingerprint readers 80% of the
time using 'gummy fingers'.
Worse news came in 2004, when
researchers revealed that some finger-
print readers could be bypassed merely
by blowing gently on them, forcing the
system to read in an earlier latent print
from a genuine user.
Attacks are not limited only to finger-
print readers (as found in the current
range of network access devices); both
face and iris scanners can be spoofed
successfully. In the case of the former, a
substitute photograph or video of a
legitimate user may be able to bypass
systems; with regard to iris scanners, a
photograph of the iris taken under dif-
fused lighting and with a hole cut for
the pupil can make for an effective
spoofing stratagem.
If compromised biometric devices are
a conduit into a network, it may be pos-
sible to manipulate stored data, thus
effectively bypassing all security policies
and procedures that are in place.
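The digital-spoofing class is, at heart, a capture-and-replay problem. The Python sketch below is a minimal illustration rather than any vendor's protocol: a verifier that accepts a bare template digest can be fed a sniffed copy forever, whereas binding each submission to a fresh server nonce defeats a simple resend (though not an attacker who compromises the sensor itself). The keys, template bytes and message format are all hypothetical.

import hmac, hashlib, os

# Hypothetical shared state: an HMAC key held by sensor and server,
# plus the digest of the enrolled template. Illustrative only.
SENSOR_KEY = os.urandom(32)
ENROLLED = hashlib.sha256(b"alice-template-bytes").digest()

def naive_verify(submitted_digest):
    # Vulnerable: anything sniffed off the wire can simply be resent later.
    return hmac.compare_digest(submitted_digest, ENROLLED)

def challenge():
    # Server issues a fresh nonce for every authentication attempt.
    return os.urandom(16)

def sensor_response(template_bytes, nonce):
    # Sensor binds the live capture to the nonce before transmission.
    digest = hashlib.sha256(template_bytes).digest()
    return hmac.new(SENSOR_KEY, nonce + digest, hashlib.sha256).digest()

def verify(response, nonce):
    expected = hmac.new(SENSOR_KEY, nonce + ENROLLED, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

nonce = challenge()
response = sensor_response(b"alice-template-bytes", nonce)
assert verify(response, nonce)            # fresh capture accepted
assert not verify(response, challenge())  # replaying it under a new nonce fails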
Attack on all sides
As has been outlined, biometric technolo-
gies are far from risk-free. Many, if not
all, are susceptible to both physical and
digital attack vectors. The reasons
for these shortcomings are many, includ-
ing a potential ignorance about security
concerns on the manufacturer's part, a
lack of quality control, and little or no
standardisation of the technologies in use.
There is also the sometimes onerous
and problematic process of registering
users who may not embrace the use of
biometrics, and who may start quoting
passages from the Human Rights Act.
When you think about implementing
biometric technologies remember that
they do not yet measure perfectly, and
many operational and security chal-
lenges can cause them to fail, or be
bypassed by attackers. Presently there is
not enough hard evidence that shows
the real levels of failure and risk associ-
ated with the use of biometric authenti-
cation technologies. It would be a brave
administrator indeed that chose to
embrace them blindly and without a
degree of external coercion, such as a
change in the legislation.
Goodbye to passwords?
Biometric technologies have the poten-
tial to revolutionise mechanisms of net-
work authentication. They have several
advantages: users never need to remember
a password, and they offer greater
resilience against automated attacks and
conventional social engineering. However,
the market for such devices is still young,
and clear statistical data on their costs
and benefits is sparse.
Most large companies can probably
afford to implement them. But doing so
may have the undesirable side effect of
actually increasing their exposure to risk.
In particular, the lack of standardisation
and quality control remains a serious
concern.
In the coming years, biometrics may
improve as an authentication technolo-
gy, if only because politicians and fraud-
sters are currently driving the need for
improvements. Given the present level of
technical understanding and standardisation,
and the many signs of user resistance,
network administrators who voluntarily
introduce the technology may find
themselves on the bleeding edge rather
than the leading edge.
Network administrators need to ques-
tion closely not only the need for bio-
metrics as a network authentication and
access mechanism, but also the levels of
risk they currently pose to the enter-
prise. For most, the answer will be to
wait and see.
About the author
Michael Kemp is an experienced technical
author and consultant specialising in the
information security arena. He is a widely
published author and has prepared numer-
ous courses, articles and papers for a
diverse range of IT related companies and
periodicals. Currently, he is employed by
NGS Software Ltd where he has been
involved in a range of security and
documentation projects. He holds a degree in
Information and Communications and is
currently studying for CISSP certification.
Proactive security latest: vendors wire the cage but has the budgie flown…
Philip Hunter
Proactive security sounds at first sight like just another marketing
gimmick to persuade customers to sign up for yet another false dawn.
After all, proactivity is surely just good practice: protecting in
advance against threats that are known about, like bolting your back
door just in case the burglar comes. To some, proactive security is
indeed just a rallying call, urging IT managers to protect against
known threats and avoid easily identifiable vulnerabilities. All too
often, for example, desktops are not properly monitored, allowing
users to unwittingly expose internal networks to threats such as
spyware. Similarly, remote execution can be made the exception rather
than the default, making it harder for hackers to co-opt internal
servers for their nefarious ends.
Vendor bandwagon
Nevertheless the vendors do seem
to have decided that proactive security
is one of the big ideas for 2005, and
there is some substance behind the
hype. Cisco for example came out
with a product blitz in February 2005
under the banner of Adaptive Threat
Defence. IBM meanwhile has been
promoting proactive security at the
lower level of cryptography and digital
signatures, while Microsoft has been
working with a company called
PreEmptive Solutions to make its code
harder for hackers to reverse engineer
from the compiled version. The dedi-
cated IT security vendors have also
been at it. Internet Security Systems
has been boasting of how its customers
have benefited from its pre-emptive
protection anticipating threats before
they happen. And Symantec has
brought to market the so-called digital
immune system developed in a joint
project with IBM.
Unreactive
These various products and strategies
might appear disjointed when taken
together, but they have in common the
necessary objective of moving beyond
reaction, which is no longer tenable in
the modern security climate. The crucial
question is whether these initiatives real-
ly deliver what enterprises need, which is
affordable pre-emptive protection. If the
solutions extract too great a toll on
internal resources through need for con-
tinual reconfiguration and endless analy-
sis of reports containing too many false
positives, then they are unworkable.
Proactive security has to be as far as pos-
sible automatic.
On this count some progress has been
made but there is still a heavy onus on
enterprises to actually implement proac-
tive security. Some of this is inevitable,
for no enterprise can make its network
secure without implementing some good
housekeeping measures. The products
can only deliver if they are part of a
coherent strategy involving analysis of
internal vulnerabilities against external
threats.
Indeed this is an important first step
towards identifying which products are
relevant. For example the decline in
perimeter security as provided by fire-
walls has created new internal targets
for hackers, notably PCs, but also
servers that can be co-opted as staging
posts for attacks. There is also the risk
of an enterprise finding its servers or
PCs exploited for illegal activities such
as peer-to-peer transfer of software,
music or even video, without its knowl-
edge. Identifying such threats and
putting appropriate monitoring tools in
place is an important first step along
the pre-emptive path.
Stop the exploitation
However some of the efforts being
made will benefit everybody and come
automatically with emerging releases of
software. Microsoft’s work with
PreEmptive Solutions springs to mind
here, as the technology concerned is
included with Visual studio 2005.
This technology called Dotfuscator
Community Edition is designed to
make the task of reconstituting
source code from the compiled object
code practically impossible, so that
hackers are unlikely to try. Of course
the risk then becomes of the source
code itself being stolen, but that is
another matter.
Sharing private keys
The principle of ducking and weaving
to evade hackers can also be extended
to cryptography. The public key system
is widely used both to encrypt session
keys and also for digital signatures.
The latter has become a target for
financial fraudsters because if they steal
someone's private key they can write
that person's digital signature, thereby
effecting identity theft. But here too,
risks can be greatly reduced through
proactivity. An idea being developed by
IBM involves distributing private keys
among a number of computers rather
than just one. Then the secret key can
only be invoked, whether for a digital
signature or to decrypt a message, with
the participation of a number of com-
puters. This makes it harder to steal the
key because all the computers involved
have to be compromised rather than
just one. In practice it is likely that at
least one of the computers will be
secure at any one time – at least such is
the theory. This development comes at
a time of increasing online fraud and
mounting concerns over the security of
digital signatures.
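The sketch below illustrates only the general principle of distributing key material, using the simplest possible XOR splitting in Python. It is not IBM's scheme: a real deployment would use threshold cryptography so that the key never needs to be reassembled in one place, whereas this toy version requires every share for every operation.

import os

def split_key(key: bytes, n: int) -> list:
    # Split a secret into n XOR shares; any n-1 of them reveal nothing.
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = bytearray(key)
    for share in shares:
        for i, byte in enumerate(share):
            last[i] ^= byte
    return shares + [bytes(last)]

def combine(shares) -> bytes:
    # Reconstructing the key needs every share, i.e. every computer.
    key = bytearray(len(shares[0]))
    for share in shares:
        for i, byte in enumerate(share):
            key[i] ^= byte
    return bytes(key)

secret_key = os.urandom(32)               # stand-in for a private signing key
shares = split_key(secret_key, 3)         # one share per participating computer
assert combine(shares) == secret_key      # all three must cooperate
assert combine(shares[:2]) != secret_key  # two alone yield only random bytes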
Buglife
There is also scope for being proactive
when it comes to known bugs or vul-
nerabilities in software. One of the
most celebrated examples came in July
2002, when Microsoft reported a
vulnerability in its SQL Server 2000
Resolution Service, designed to allow
multiple databases to run on a single
machine. There was the potential to
launch a buffer overflow attack, in
which a hacker invokes execution of
code such as a worm by overwriting
legitimate pointers within an applica-
tion. This can be prevented by code
that prohibits any such overwriting, but
Microsoft had neglected to include such
a check in the Resolution Service. One
security vendor,
Internet Security Systems, was quick
off the mark, and in September 2002
distributed an update that provided
protection. Then in January 2003 came
the infamous Slammer Worm exploiting
this loophole, breaking new ground
through its rapid propagation, doubling
the infected population every 9 seconds
at its height. The case highlighted
the potential for pre-emptive action,
but also the scale of the task in distrib-
uting the protection throughout the
Internet.
Open disclosure
Another problem is that some software
vendors fail to disclose vulnerabilities
when they do occur, through fear of
adverse publicity. This leads to delay in
identifying the risks, making it even
harder to be proactive. It makes sense
therefore for enterprises to buy software
only where possible from vendors that
practice an open disclosure policy. Many
such disclosures can be found on the
BUGTRAQ mailing list, but a number
of vendors, and in some cases even
suppliers of free software who would seem
to have nothing to gain by it, hide issues
from their users. There is, however, a
counter argument in that public dissemi-
nation of vulnerabilities actually helps
and encourages potential hackers. But
there is the feeling now that in general
the benefits of full disclosure outweigh
the risks.
Patch it
Be that as it may, the greatest challenge
for proactive security lies in responding
to vulnerabilities and distributing patches
or updates to plug them within ever
decreasing time windows. As we have just
seen, the Slammer worm took six months to
arrive, and the same was true of Nimda. This
left plenty of time to create patches and
warn the public, which did reduce the
impact. But the window has since
shortened significantly – a study by
Qualys, which provides on-demand vul-
nerability management solutions,
reported in July 2004 that 80% of
exploits were enacted within 60 days of
a vulnerability’s announcement. In
some cases now it takes just a week or
two, so the processes of developing and
distributing patches need to be speeded
up. Ideally service providers should
implement or distribute such protection
automatically.
Conclusion
Proactive security also needs to be flexi-
ble, adapting to the changing threat
landscape. A good example is the case of
two-factor security, in which static pass-
words are reinforced by tokens generat-
ing dynamic keys on the fly. This has
been the gold standard for controlling
internal access to computer systems
within the finance sector for well over a
decade, but recently there have been
moves to extend it to consumer Internet
banking. But some experts reckon this is
a waste of money because it fails to
address the different threats posed by
Internet fraudsters. These include man-
in-the-middle attacks, which capture the
one-time key as well as the static password
and replay both to the online
bank. So it may be that while two-factor
security will reduce fraud through guess-
ing or stealing static passwords, the cost
of implementing it across a customer
base will outweigh the benefits, given
that vulnerabilities remain. But nobody
is suggesting that proactive security
avoids hard decisions balancing
solutions against threats and cost of
implementation.
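To see why a relayed code still works, consider the standard time-based one-time password construction (RFC 6238); bank tokens often use proprietary variants, so the Python sketch below is purely illustrative and the shared seed is made up. A code phished from the customer and forwarded by a man in the middle within the same validity window verifies exactly as if the genuine customer had typed it.

import hmac, hashlib, struct, time

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    # Standard time-based one-time password (RFC 6238 style).
    counter = int(at // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return f"{code:0{digits}d}"

secret = b"shared-seed-between-bank-and-token"   # illustrative seed
now = time.time()
code_phished_from_customer = totp(secret, now)

# The bank checks whatever arrives during the current window, so a code
# relayed a few seconds later by a man in the middle is still accepted.
bank_accepts = totp(secret, now + 5) == code_phished_from_customer
print("replay inside the same 30-second window accepted:", bank_accepts)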
Management aspects of secure messaging between organizations
Roger Dean, Head of Special Projects, EEMA
Electronic messaging is vulnerable to eavesdropping and impersonation,
and companies that do not protect sensitive information lay themselves
open to significant risk. Here we take a short glimpse at some of the
issues associated with Public Key Infrastructure (PKI), and some less
expensive options.
Secure messaging employing end-to-end
architectures and PKIs offers message
confidentiality through encryption, and
message authentication through digital
signatures. However, there are a number
of implementation and operational issues
associated with them.
One of the major criticisms is the
overheads involved in certificate and key
management. Typically, certificates and
keys are assigned a lifetime of one to
three years, after which they must be
replaced (rekeyed). A current trend is to
employ a rigorous semi-manual process
to deploy initial certificates and keys and
to automate the ongoing management
processes. For the initial issuance, it is
vital to confirm the identity of the key
and certificate recipients, especially
where messages between organizations
are to be digitally signed.
Business partners must have trust in
each others’ PKIs to a level commensu-
rate with the value of the information to
be communicated. This may be deter-
mined by the thoroughness of the
processes operated by the Trust Centre
that issued the certificates, as defined in
the Certificate Policy and Certification
Practice Statement.
The organisation’s corporate directory
plays a critical role as the mechanism for
publishing certificates. However, corpo-
rate directories contain a significant
amount of information which may
create data-protection issues if published
in full. Secondly, corporate directories
usually allow wildcards in search criteria,
but these are unwise for external connec-
tion as they could be used to harvest e-
mail addresses for virus and spam
attacks. Furthermore, organizations may
publish certificates in different locations.
Dedicated line and
routing
The underlying idea for this alternative
to a full-blown PKI is to transmit mes-
sages on a path between the participating
organizations that avoids the open
Internet. There are two major options:
A dedicated line between the
involved companies
With this option all messages are nor-
mally transmitted without any protec-
tion of content. The level of confidentiality
for intercompany traffic thus becomes the
same as for intracompany traffic, and for
many types of information that may be
sufficient. Depending
on bandwidth, network provider and
end locations, however, this option may
be expensive.
A VPN connection between
participating companies
Such a connection normally employs the
Internet, but an encrypted, secure tunnel
on the network layer is established
between the networks of participants.
Thus all information is protected by
encryption. An investment to purchase
or upgrade the network routers at the
endpoints of the secure tunnel may,
however, be significant.
Most of the work to implement such
solutions lies in establishing the network
connection, and a dedicated line may
have a considerable lead time. The same
applies for new network routers as end-
points of a VPN.
Gateway to gateway
encryption using
Transport Layer Security
(TLS)
Internet email messages are vulnerable to
eavesdropping because the Internet
Simple Message Transfer Protocol
(SMTP) does not provide encryption. To
protect these messages, servers can use
TLS to encrypt the data packets as they
pass between the servers. With TLS, each
packet of data is encrypted by the send-
ing server, and decrypted by the receiving
server. TLS is already built into many
messaging servers, including Microsoft
Exchange and IBM Lotus Domino, so
that implementation may simply involve
the installation of an X.509 server certifi-
cate and activation of the TLS protocol.
The downside is that data is protected
only in transit between servers that sup-
port TLS. TLS does not protect a message
at all stages during transport unless it is
implemented on every server along the
delivery path.
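As a simple illustration of TLS on a single hop, the Python sketch below hands a message to one mail relay and upgrades the connection with STARTTLS. The addresses and host name are placeholders, and in practice gateway-to-gateway TLS is configured on the mail servers themselves rather than scripted in this way.

import smtplib, ssl
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"            # placeholder addresses
msg["To"] = "bob@partner.example"
msg["Subject"] = "Quarterly figures"
msg.set_content("Body protected only on TLS-capable hops.")

context = ssl.create_default_context()
with smtplib.SMTP("mail.partner.example", 25) as smtp:   # placeholder relay
    smtp.starttls(context=context)           # upgrade this hop to TLS
    smtp.send_message(msg)
# Protection ends at the receiving server; any onward hop without TLS
# carries the message in the clear, which is the limitation noted above.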
Gateway to gateway
encryption using S/MIME
Gateways
An obstacle to end-to-end PKI is the
burden of managing certificates. Also,
once encrypted, messages cannot be
scanned for viruses, spam, or content.
Gateways that use the
Secure/Multipurpose Internet Mail
Extensions (S/MIME) protocol to
encrypt and decrypt messages at the
organizational boundary can address
these issues. S/MIME gateways use
public and private keys known as
domain certificates to encrypt and sign
messages that pass between domains.
They have the same format as those used
in desktop-to-desktop S/MIME message
encryption, except that the certificates
are issued to domains, not individual
users. Messages are signed and encrypted
only while in transit between the
S/MIME gateways.
An S/MIME gateway can co-exist with
unencrypted SMTP messages and with
end-to-end S/MIME encryption; it can
send and receive unencrypted and
unsigned messages to/from any e-mail
domain; and it can receive messages
signed or encrypted with conventional,
desktop-to-desktop S/MIME. In the latter
case it will not decrypt the message or
verify the signature, but will deliver it to
the recipient's mailbox with the signature
and/or encryption intact. However, it
cannot currently sign or encrypt mail
that is sent to a user in a domain that
does not have an S/MIME gateway.
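For illustration only, the sketch below uses the third-party Python 'cryptography' package to produce an S/MIME (PKCS#7) signature over an outbound message with a domain certificate; commercial gateway products implement this internally, and the file names here are placeholders.

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.serialization import pkcs7

# Placeholder files: the gateway's domain certificate and its private key.
cert = x509.load_pem_x509_certificate(open("domain_cert.pem", "rb").read())
key = serialization.load_pem_private_key(
    open("domain_key.pem", "rb").read(), password=None)

message = b"Content-Type: text/plain\r\n\r\nInvoice attached.\r\n"

# Sign on behalf of the whole domain as the message leaves the boundary.
signed = (
    pkcs7.PKCS7SignatureBuilder()
    .set_data(message)
    .add_signer(cert, key, hashes.SHA256())
    .sign(serialization.Encoding.SMIME, [pkcs7.PKCS7Options.DetachedSignature])
)
open("outbound_signed.eml", "wb").write(signed)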
Pretty Good Privacy (PGP)
The OpenPGP and PGP/MIME proto-
cols are based on PGP and rely on
MIME for message structure. Today, a
specialised S/MIME client can’t normal-
ly communicate with a PGP client,
although that may change. PGP has
been described as a good example of
what a PKI is, and it enables the user to
scale the PKI implementation from indi-
viduals up to several thousand users. It
comprises a number of products that
can be implemented incrementally
according to requirement. With PGP
there is no reason to hesitate to imple-
ment and make use of secure messaging
capability because of cost or complexity:
it’s perfectly possible for the small to
medium sized company ) to create an
environment which is functional, inex-
pensive and easy to manage.
Attachment, encryption
and compression
A number of document storage and
communication products, such as MS Word,
MS Excel and the Adobe family, are supplied
with confidentiality features of various
kinds. Another collection is rep-
resented by file compressing tools. These
allocate the smallest possible storage area
for any number of files gathered, and are
often equipped with advanced encryp-
tion capability. For example, the latest
version of WinZip is supplied with 256
bit AES encryption.
There are some limitations to compression
tools in the area of secure messaging. Key
handling is cumbersome and, if the tools are
used extensively, may cause trouble.
Also, compression tools can’t normally
protect the actual message, just the
attached file(s); and the password must
be delivered to the recipient separately –
preferably by phone. File compression is
therefore a temporary or special solu-
tion, to be used with discernment.
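The underlying idea can be shown in generic form, assuming the Python 'cryptography' package: a 256-bit AES key is derived from a passphrase, the attachment is encrypted, and the passphrase travels out of band. This is not WinZip's file format, and the file names are placeholders.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_attachment(path: str, passphrase: str) -> bytes:
    salt, nonce = os.urandom(16), os.urandom(12)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=200_000)
    key = kdf.derive(passphrase.encode())        # 256-bit AES key
    ciphertext = AESGCM(key).encrypt(nonce, open(path, "rb").read(), None)
    return salt + nonce + ciphertext             # recipient re-derives the key

blob = encrypt_attachment("contract.doc", "phrase given to recipient by phone")
open("contract.doc.enc", "wb").write(blob)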
More information
More information can be found in the
full report available from EEMA, a large
multi-national user organization.
EEMA is exhibiting at Infosecurity
Europe 2005, which is held on the 26th
– 28th April 2005 in the Grand Hall,
Olympia in London.
More details:
RFID: misunderstood or untrustworthy?
Bruce Potter
It seems that everywhere you look, wireless security is in the news.
WiFi networks are being deployed in homes and businesses at an
astounding rate. Bluetooth is being integrated into all manner of
devices, from cell phones to laptops to automobiles. And now RFID tags
are starting to show up in some retail stores and to gain acceptance
for use in supply chain management.
But of these three technologies, RFID is probably the least understood
and most feared by the public at large. Consumers are afraid of their
buying habits being tracked. Travellers are concerned about the privacy
issues of RFID in passports. And businesses are worried that the current
state of the technology is not sufficient to keep hackers at bay.
Ultimately, RFID has the capability to change the face of supply chain
management and inventory control, and we need to be prepared for that.
RFID Basics
RFID (Radio Frequency IDentification) has been around for decades.
Initially used for proximity access control, RFID has evolved over the
years to be used in supply chain tracking, toll barrier control, and
even protecting automobiles. The cost of the chips used for RFID is now
as low as US$0.20, with readers costing as little as US$30, making
large-scale deployments more cost effective.
There are several types of RFID tag. The most common and simple is the
passive tag. Passive RFID tags receive their energy from a remote RFID
reader. The tag is able to focus the radio frequency energy from the
transmitting reader and uses the generated electrical impulse to power
the onboard chip.
These RFID chips are very simple and
may have as few as 400 logic gates in
them; they can basically be thought of as a
simple memory chip. The chip then
responds with a short burst of informa-
tion (typically an ID unique to the chip)
that is transmitted by the antenna on the
RFID tag. The reader receives this
information and can then act upon it.
Passive tags can be manufactured thinner
than a piece of paper and have been
integrated into everything from shipping
labels to clothing.
The other types of RFID involve using
a battery for some part of the RFID
transaction. Semi-passive tags use a
small onboard battery to power the chip,
but rely on the energy from the reader
for powering the tag’s antenna for trans-
mission. Semi-active tags turn this con-
cept around. These tags use the battery
for powering the antenna but the chip
relies on the RF energy from the reader.
An Active tag uses a battery for both the
chip and the transmission of data on the
antenna. While the amount of memory
in the non-active tags is limited to gener-
ally a few hundred bytes (if that), an active
tag can have kilobytes (if not megabytes)
of memory. The drawback of any of the
powered tags is that eventually the battery
dies and the tag becomes useless.
Security concerns
There are a wide variety of security con-
cerns with RFID tags. One concern of
interest is the ability to track the location
of a person or asset by an unintended
actor. While the RFID specifications
generally deal with short ranges (a few
inches to a few feet) between the readers
and the tags, specialized equipment can
pick up a signal from an RFID tag much
farther away.
This is a similar problem to that with
wireless LANs. Normally a WLAN is
only effective for a user within 100m or
so. But an attacker with powerful anten-
nas can be more than 10km away and
still access the network. RFID tags fall
prey to the same problem; an attacker can
be two orders of magnitude farther away
than intended and still read data. For
instance, if an RFID tag is designed to be
read at 1 foot, an attacker may be able to
be 100 ft away and still interact with it.
RFID tags typically only contain a
unique number that is useless on its
own. The idea is that the reader inter-
faces with some backend system and
database for all transactions. The data-
base stores the information that ties the
unique ID to something of interest. For
instance, the database knows that ID
1234 is attached to a bar of soap. An
attacker reading RFID tags would not
know, without access to the database,
what ID 1234 is.
Unfortunately, we cannot always
assume that an attacker will not have
access to the backend database. As the
last decades of network security have
demonstrated, backend systems are often
all too easy a target for an attacker. And
once the database tying the unique IDs
to physical items has been compromised,
it would be nearly impossible to retag all
items in response.
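A few lines of Python make the point: without the backend, a captured identifier is just an opaque string. The tag IDs and item descriptions below are, of course, made up.

# Meaning comes entirely from the backend database the reader consults.
backend_db = {
    "04A2-19FF-3310": "pallet 7731, flat-screen TVs",
    "04A2-19FF-3311": "bar of soap, SKU 1234",
}

def reader_scan(tag_id: str) -> str:
    # A legitimate reader resolves the ID through the backend.
    return backend_db.get(tag_id, "unknown tag")

def eavesdropper_scan(tag_id: str) -> str:
    # An attacker without database access learns only an opaque number,
    # unless the backend itself is later compromised.
    return f"captured opaque ID {tag_id}"

print(reader_scan("04A2-19FF-3311"))
print(eavesdropper_scan("04A2-19FF-3311"))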
The vast majority of RFID tags on the
market require no authentication to read
the information on them. This allows
anyone, an attacker or even just a com-
petitor, to read the data on an RFID
chip. Further, many tags have the capa-
bility to write information to the chip
without authentication. This is especial-
ly troubling for enterprises relying on
RFID for things like supply chain man-
agement. An attacker could theoretically
overwrite values on the RFID tags used
by the enterprise, thereby wreaking
havoc with their RFID system.
Killing a tag
One of the primary privacy concerns
regarding RFID is the ability for a con-
sumer to be tracked once they have
bought an item that contains an RFID
tag. To overcome this fear, vendors and
enterprises have devised various ways to
attempt to terminate the tag.
One method of terminating a tag used
for retail sales is simply to overwrite the
data on the tag with random values when
the item is sold. That way a store's security
system knows the item has been sold and
does not sound an alarm when the item
leaves. Further, with random data, the idea
is that the tag's contents can no longer be
tied to a value in the database.
The problem with this method is that
there is still an RFID chip active in the
item, even if the data on the chip is ran-
dom. An attacker is still able to physi-
cally track the tag, and even store data
on it if they so desire. So some tags
also have the concept of a KILL com-
mand. When a tag receives a KILL
command, it ceases to respond to
requests from RFID readers. A KILL
command actually terminates the RF
capability of the chip.
While this is good from a privacy per-
spective, it poses a massive security risk.
The KILL command is protected by a
password on the chip. Unfortunately,
RFID chips are very primitive, so many
enterprises have all their RFID chips
created with the same KILL password.
Further, there is no capability to change
the KILL password once a chip has been
fabricated. An attacker with knowledge
of an enterprise's KILL password can
potentially terminate every tag within
range. In a short period of
time, an attacker can render hundreds of
thousands of tags completely useless.
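The toy Python model below, with invented tag IDs and an invented password, shows why a single fleet-wide KILL password is so dangerous: one leaked value is enough to silence every tag an attacker can reach.

class Tag:
    def __init__(self, uid: str, kill_password: str):
        self.uid = uid
        self._kill_password = kill_password   # fixed at fabrication time
        self.alive = True

    def read(self):
        return self.uid if self.alive else None   # a killed tag goes silent

    def kill(self, password: str) -> bool:
        if self.alive and password == self._kill_password:
            self.alive = False
        return not self.alive

# Every tag in the fleet shares the same, unchangeable kill password.
fleet = [Tag(f"TAG-{n:06d}", kill_password="2F7C91AA") for n in range(100_000)]

leaked = "2F7C91AA"        # the attacker learns the shared password once...
for tag in fleet:          # ...and can render the whole fleet useless
    tag.kill(leaked)

print(sum(tag.read() is None for tag in fleet), "tags no longer respond")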
Parting shot
As RFID tags get cheaper, they will be
integrated into more and more systems.
While an incredible tool for supply chain
management and asset tracking, RFID
tags have more in common with 20-year-old
memory card technologies than with
contemporary wireless systems. Unlike old
memory cards, RFID tags are accessible
from a great distance given advanced
wireless equipment. Attacks against
RFID tags are trivial and privacy con-
cerns are everywhere. To date, these con-
cerns have not outweighed the advantages
to businesses in need of RFID technology
and the rate of adoption is accelerating.
Until new standards and more advanced
chips can be made, RFID tags will
remain easy targets for attackers deter-
mined to cause havoc or commit crimes.
About the author
Bruce Potter is currently a senior security
consultant at Booz Allen Hamilton.
Network security managers' preferences for the Snort IDS and GUI add-ons
Galen A. Grimes, Penn State McKeesport, 4000 University Drive, McKeesport, PA 15132, USA
Snort, one of the most widely used Intrusion Detection System (IDS)
products on the market, is extremely versatile and configurable, and
runs on Linux, most UNIX platforms, and Windows. Snort is a fairly
difficult product to use fully because of its stark command-line
interface and unordered scan and attack data. That difficulty,
however, has spawned a near cottage industry among Snort developers,
who have created a myriad of graphical user interfaces (GUIs) in an
attempt to provide an easier means for network security managers to
fully configure and use Snort. This analysis also looks at which Snort
add-on products are favoured by network security managers.
Although the security marketplace has
no shortage of good, reliable intrusion
detection systems, one open source prod-
uct still manages to hold a very promi-
nent position in the security manager's
arsenal - Snort.
Snort is one of the most widely used
Intrusion Detection System (IDS) products
currently on the market. It is a command-line
intrusion detection program based on the
libpcap packet capture library, and it is
extremely versatile and configurable, running
on Linux, most UNIX platforms, and
Windows.[1] According to DataNerds,
"Snort is a lightweight network intru-
sion detection system, capable of per-
forming real-time traffic analysis and
packet logging on IP networks. It can
perform protocol analysis, content
searching/matching and can be used to
detect a variety of attacks and probes,
such as buffer overflows, stealth port
scans, CGI attacks, SMB probes, OS fin-
gerprinting attempts, and much more.
Snort uses a flexible rules language to
describe traffic that it should collect or
pass, as well as a detection engine that
utilizes a modular plug-in architecture.
Snort has a real-time alerting capability
as well, incorporating alerting
mechanisms for syslog, a user specified
file, a UNIX socket, or WinPopup mes-
sages to Windows clients using Samba's
smbclient." (DataNerds, 2002).
While the program is very robust and
versatile in its ability to detect more than
1200 different types of real-time scans
and attacks, it is nonetheless somewhat
tedious and difficult to use. Snort
employs a rather cryptic command-line
interface and all program configurations
are done by manually editing the one
configuration file - snort.conf. Snort
outputs its detected scans and probes
into an unordered hierarchical set of
directories and text files. Its output,
however, can be made more organized and
structured by employing a commonly
used database plug-in (add-on) and
directing the output to one of several
supported SQL database products, such
as MySQL, PostgreSQL, Oracle or
MS SQL Server.
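The kind of summarising these add-ons automate can be approximated in a few lines of Python. The sketch below tallies alerts per signature from a Snort 'fast'-style alert file; the pattern is deliberately loose, since the exact line format varies with Snort version and output plug-in, and the file path is a placeholder.

import re
from collections import Counter

# Roughly: "01/25-10:35:21.123 [**] [1:2003:8] Signature name [**] ..."
ALERT = re.compile(r"\[\*\*\]\s*(?:\[[\d:]+\]\s*)?(?P<sig>.+?)\s*\[\*\*\]")

def tally(alert_file: str, top: int = 10):
    counts = Counter()
    with open(alert_file) as handle:
        for line in handle:
            match = ALERT.search(line)
            if match:
                counts[match.group("sig")] += 1
    return counts.most_common(top)

for signature, hits in tally("alert.fast"):   # path is a placeholder
    print(f"{hits:6d}  {signature}")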
Because of the tediousness of working
with a command-line version of Snort,
the legion of Snort devotees and devel-
opers have created a near cottage indus-
try around developing and improving
front-end GUI interfaces to complement
Snort. This improvement in the user
interface has greatly expanded the use of
Snort among non-developers, since it not
only makes this powerful program more
accessible but also makes it easier for
non-developers to understand the alerts
generated by the IDS.
These interfaces can mainly be divided
into two broad categories: the first
comprises add-ons that organize Snort's
output into a structured set of reports and
attack-trend indicators, while the second
comprises add-ons designed to ease the
tediousness of configuring Snort and
maintaining its vast signature ruleset.
Most of the front-end interfaces were
originally designed to operate on a
Linux/UNIX platform, but many have
also been ported to operate in
Windows. There is even a port of Snort
to the Mac OS X platform that uses the
now-familiar Mac OS X GUI.
Who uses Snort?
In this study a population of 195 network
security managers from US colleges and
universities was surveyed. The colleges and
universities were selected arbitrarily, from
a fairly even distribution across 40 states
and the District of Columbia, out of the
6814 colleges and universities listed in
the Yahoo search directory (By Region >
U.S. States).
The sample comprised 27.2% of this
population (roughly 53 respondents).
The survey was an attempt to determine
whether network security administrators
use Snort and any of the available add-on
products, and what factors contributed to
their decision to use the particular add-on
selected.
In the sample, 17.0% had small networks
comprising fewer than 1000 workstations,
while 83.0% had large networks comprising
more than 1000 workstations.
The network administrators were first
asked "Do you use the Snort Intrusion
Detection system?". In the sample,
45.3% of network security administra-
tors surveyed stated they use Snort, and
the vast majority of the network security
managers who use Snort use it on a
Linux platform (78.3%).
Those security managers who do not
use Snort gave the following reasons:
• Don't use any IDS system (44.8%).
• Snort is not as useful as a commercial
IDS product (24.1%).
• Don't use open source (6.9%).
• Snort installation/setup procedure
too complicated (6.9%).
• Did not have time to install/setup
Snort (6.9%).
• Snort not robust enough (3.4%).
• Use IPS instead (10.3%).
Interfaces for organizing Snort's
output
It is not surprising that the vast majority
of the front-end interfaces for Snort are
designed to help users organize Snort's
voluminous output and display it as
coherent reports. Even on a small to
medium-sized network or network segment
it is not unusual for Snort to generate
between 15,000 and 20,000 legitimate
alerts each month. Examples of
interfaces are as follows:
• Analysis Console for Intrusion
Databases (ACID) - 66.7% of all network
security managers who use Snort say
they also use ACID.
• PureSecure - used by a number of the
network security managers surveyed.
• SnortFE - not used by any survey
respondents.
• Snortsnarf - 12.5% of network security
managers use it.
Interfaces for configuring Snort
Some Snort developers have concentrated
on developing easier-to-use environments
for configuring Snort's network settings,
preprocessor controls and output plug-ins,
and for updating its rules files.
This study seems to suggest that this
category of add-ons is not nearly as popu-
lar as the first category. In this study
79.2% of all network security managers
who use Snort use one or more of the
report/trend analysis add-ons (category 1)
while only 25.0% of the network security
managers who use Snort use one or more
of the configuration add-ons (category 2).
• IDScenter (available free) - 16.7% of
all network security managers who use
Snort use or have tried IDScenter.
• SnortCenter (users.pandora.be/larc/index.html)
- only 8.3% of the network security
managers who use Snort say they also
use SnortCenter.
• Hen Wen (Mac OS X) - 8.2% of the
responding network managers who use
Snort use or have tried Hen Wen.
Conclusion
This study suggests that as network size
increases, network security managers are
much more likely to include an IDS such
as Snort in their security arsenals, as
suggested by security best practices.
Among the security managers
who reported using Snort, 87.5%
administer large networks (>1000 work-
stations and/or host computers) and
12.5% administer small networks
(<1000 workstations and/or host com-
puters). This study also shows that the
decision to use Snort as their IDS of
choice also includes the choice of which
GUI front-end to use and overwhelm-
ingly the network security managers
represented in this study chose ACID.
This choice of Snort add-ons also sug-
gests that most security administrators
are using Snort more as an attack trend
analysis tool rather than as a real-time
intrusion indicator. This study also
shows that network security administra-
tors also strongly favor the Snort/ACID
combination in operation on a Linux
platform (78.3%). This could possibly
explain the poor showing of IDScenter,
which includes ACID but operates only
on Windows.
The addition of a GUI interface such
as ACID, or any of the other add-ons
mentioned in this study, has been
shown in numerous other studies to
improve operator efficiency (Mann &
Schnetzler, 1986; Pulat & Nwankwo,
1987), and few will deny that the addition
of GUI front-ends and report generators
has made Snort a more viable product for
a larger target audience, since the
interfaces make the product more usable.
In addition to more user-friendly
interfaces, many of the developer sites
now offer installation assistance for Snort.
The development of the variety of GUI
front ends described in this article, and
the added usability they bring, means that
security administrators now have a much
wider choice in how they deploy Snort-based
sensors on their networks. Since an IDS is
a passive device with low CPU overhead,
security managers are not restricted in the
choice or number of front-end products they
deploy, and can place any number of Snort
sensors on their network with any
combination of the front ends previously
listed.
References:
Northcutt, S. & Novak, J. (2001). Network Intrusion Detection: An Analyst's Handbook. Indianapolis: New Riders.
DataNerds (2002).
Preece, J., Rogers, Y. & Sharp, H. (2002). Interaction Design: Beyond Human-Computer Interaction. Hoboken, NJ: John Wiley & Sons.
Allen, J. (2001). CERT Guide to System and Network Security Practices. Indianapolis: Addison-Wesley Pearson Education.
Redmond-Pyle, D. & Moore, A. (1995). Graphical User Interface Design and Evaluation. London: Prentice Hall.
Note:
[1] There is also a port of Snort for the Mac OS called Hen Wen.