13th International Linux System Technology Conference
September 5-8, 2006
Georg-Simon-Ohm-Fachhochschule, Nürnberg, Germany

Abstracts

SELinux
by Ralf Spenneberg
Tuesday, 2006-09-05 10:00-18:00 and
Wednesday, 2006-09-06 10:00-18:00

By design, Linux is not very secure: only simple, discretionary access controls protect the system from attack. Mandatory Access Control (MAC) is the current answer to this problem. Up to now only two major distributions support MAC: Red Hat/Fedora uses SELinux and SUSE uses AppArmor, but the next version of Debian will also use SELinux to enhance its security. This tutorial shows you how to use and tune a given SELinux policy. You will also learn how to modify the policy for your needs and to develop new policies for as yet unsupported applications.
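To give a flavour of the policy work covered here, the following is a minimal sketch of a local policy module and the commands used to build and load it. The module name, the types and the single rule are invented for illustration, and the build commands assume the modular policy tools (checkpolicy, policycoreutils) shipped with recent Fedora releases:

    # myapp.te -- hypothetical local policy module (illustrative only)
    module myapp 1.0;

    require {
        type httpd_t;
        type var_log_t;
        class file { read getattr };
    }

    # let the web server domain read files labelled var_log_t
    allow httpd_t var_log_t:file { read getattr };

    # compile, package and load the module
    checkmodule -M -m -o myapp.mod myapp.te
    semodule_package -o myapp.pp -m myapp.mod
    semodule -i myapp.pp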

About the speaker:

The author has used Linux since 1992 and has worked as a system administrator since 1994. During this time he has worked on numerous Windows, Linux and UNIX systems. For the last five years he has been working as a freelancer in the Linux/UNIX field, mostly providing Linux/UNIX training. His specialty is network administration and security (firewalling, VPNs, intrusion detection). He has developed several training classes used by Red Hat and other IT training companies in Germany. He has spoken at several SANS conferences and at even more UNIX/Linux-specific conferences, and has written several German books on Linux security.

Network Monitoring with Open Source Tools
by Thomas Fritzinger, Jens Link and Christoph Wegener
Tuesday, 2006-09-05 10:00-18:00 and
Wednesday, 2006-09-06 10:00-18:00

As our daily lives depend ever more on a working IT landscape, and the infrastructures behind it grow rapidly in complexity, network management and network monitoring are becoming increasingly important. A range of complex and often very expensive commercial network monitoring tools exists. This workshop shows how to achieve comparable functionality with specialized, free and open source programs.

Topics in detail / outline of the tutorial:

  • Organizational questions
    • Approaches to network monitoring
    • Business planning / business continuity / TCO
    • Why free and open source software?
    • The role of network monitoring in risk management under Basel II and the Sarbanes-Oxley Act (SOX)
  • Legal aspects
  • Simple Network Management Protocol (SNMP)
  • Qualitative monitoring
    • Multi Router Traffic Grapher (MRTG)
  • Availability monitoring
    • Nagios (see the configuration sketch after this list)
  • Proactive monitoring, analyzing log files
  • Troubleshooting networks with Ethereal
  • Security monitoring
    • nmap
    • Nessus and new open source alternatives
    • Snort
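To give an idea of what the Nagios part involves, here is a minimal, purely illustrative pair of object definitions for monitoring one web server; the host name, address and the templates referenced via "use" are assumptions, and check_http comes from the standard Nagios plugins package:

    # minimal Nagios host and service definition (illustrative values only)
    define host {
        use         generic-host          ; inherit defaults from a template
        host_name   www1
        alias       Example web server
        address     192.0.2.10
    }

    define service {
        use                  generic-service
        host_name            www1
        service_description  HTTP
        check_command        check_http   ; plugin from the nagios-plugins package
    }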

The material is presented lecture-style and reinforced through hands-on exercises that the participants carry out on their own machines. Copies of the slides, with room for notes, are provided as accompanying material.

Target audience / prerequisites:

This two-day tutorial is aimed at experienced system administrators whose job is to run, monitor and optimize complex network environments. Participants should already have experience installing software under Linux and should bring basic knowledge of the TCP/IP stack.

Participants must bring a computer with a current Linux distribution. Note: users of other operating systems (*BSD or MacOS) should contact the speakers before the event.

During the workshop, the setup of a Linux-based monitoring server with exemplary services will be demonstrated and discussed. This covers not only the purely technical aspects of network monitoring but also the underlying organizational and legal framework. After the event, participants will be able to put what they have learned into practice on their own.

About the speakers:

Thomas Fritzinger is a trained IT specialist for systems integration (Fachinformatiker für Systemintegration). He has also been working for iAS since 2002, where he heads the Networking Development department.

Jens Link has been working as a network and system administrator for years. During this time he has had to deal with all kinds of network problems (on all ten layers of the OSI model).

Christoph Wegener (www.wecon.net/) holds a doctorate in physics and heads the Business Development division at gits AG; he has also worked for many years as a freelance consultant in the fields of Linux and IT security. He is a co-founder of the "Arbeitsgruppe Identitätsschutz im Internet (a-i3) e.V.".

Asterisk for the beginner
by Stefan Wintermeyer
Tuesday, 2006-09-05 10:00-18:00

For those who haven't installed and configured an Asterisk server yet.

Agenda:

  • Installing Asterisk on a Knoppix or Ubuntu system
  • Configuring a basic system with two SIP phones
  • Handling incoming and outgoing calls over a SIP gateway (e.g. sigate.de)
  • Voicemail system
  • Basics of variables and how to use them
  • Codecs and protocols (see the configuration sketch after this list)
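A basic two-phone setup of the kind built in this tutorial might look roughly like the following sketch; the peer names, secrets and extension numbers are invented for illustration:

    ; sip.conf -- two SIP phones (illustrative entries)
    [phone1]
    type=friend
    host=dynamic
    secret=phone1secret
    context=internal

    [phone2]
    type=friend
    host=dynamic
    secret=phone2secret
    context=internal

    ; extensions.conf -- dial 100/101 to ring the phones, fall through to voicemail
    [internal]
    exten => 100,1,Dial(SIP/phone1,20)
    exten => 100,2,Voicemail(u100)
    exten => 101,1,Dial(SIP/phone2,20)
    exten => 101,2,Voicemail(u101)
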
About the speaker:

Stefan Wintermeyer is the author of the Asterisk book published by Addison-Wesley. His company amooma offers dedicated Asterisk training courses.

Recovering from Hard Drive Disasters
by Theodore Ts'o
Tuesday, 2006-09-05 10:00-18:00

Ever had a hard drive fail? Ever kicked yourself because you didn't keep backups of critical files, or discovered that your regular nightly backup hadn't actually been running? (Of course not, you keep regular backups and verify them frequently to make sure they succeed.) For those of you who don't, this tutorial will discuss ways of recovering from hardware or software disasters. Topics covered include a basic introduction to how hard drives work, filesystems, logical volume managers, and software RAID on Linux. Specific low-level techniques to prevent data loss include recovering from a corrupted partition table, using e2image to back up critical ext2/3 filesystem metadata, and using e2fsck and debugfs to sift through a corrupted filesystem, plus some precautions that avoid the need for heroic recovery efforts in the first place.
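For a first taste of the low-level tools mentioned above, the following commands are a rough sketch; the device name is a placeholder, and e2image is of course most useful when run regularly while the filesystem is still healthy:

    # back up critical ext2/3 metadata to a small image file
    e2image /dev/sda1 /var/backups/sda1.e2i

    # after a crash: force a full filesystem check
    e2fsck -f /dev/sda1

    # open a badly damaged filesystem read-only in catastrophic mode and look around
    debugfs -c /dev/sda1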

About the speaker:

Theodore Ts'o has been a C/Unix developer since 1987, and has been a Linux kernel developer since September 1991. He led the development of Kerberos V5 at MIT for seven years, and is the primary author and maintainer of the ext2/ext3 filesystem utilities. Theodore currently serves on the board of the Free Standards Group and contributes to the development of the Linux Standard Base. He currently is a Senior Technical Staff Member with the IBM Linux Technology Center.

Building and Maintaining RPM Packages
by Jos Vos
Tuesday, 2006-09-05 10:00-18:00

Introduction

In this tutorial attendees will learn how to create, modify and use RPM packages. The RPM Package Management system (RPM) is used for package management on most Linux distributions. It can also be used for package management on other UNIX systems and for packaging non-free (binary) software.

The tutorial will focus on creating RPM packages for Fedora and Red Hat Enterprise Linux systems, but the theory also applies to packaging software for other distributions.

Contents

General software packaging theory will be provided as a start, followed by the history and basics of the RPM packaging system.

The headers and sections of an RPM spec file will be discussed. Hints and tricks will be given for each section to enhance the quality of the target package, including the use of macros, adapting software for installing it in an alternative root directory, ensuring correct file ownerships and attributes, the proper use of pre/post (un)installation and "trigger" scripts, and how to deal with package-specific users and init scripts.
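To make the structure concrete, here is a heavily abridged, illustrative spec file skeleton; the package name, version and file list are placeholders, not material from the tutorial itself:

    Name:           example
    Version:        1.0
    Release:        1
    Summary:        Example package
    License:        GPL
    Group:          Applications/System
    Source0:        example-1.0.tar.gz
    BuildRoot:      %{_tmppath}/%{name}-%{version}-root

    %description
    A minimal example package.

    %prep
    %setup -q

    %build
    make

    %install
    rm -rf $RPM_BUILD_ROOT
    make install DESTDIR=$RPM_BUILD_ROOT

    %clean
    rm -rf $RPM_BUILD_ROOT

    %files
    %defattr(-,root,root,-)
    /usr/bin/example

Such a file is typically built with "rpmbuild -ba example.spec", which produces both the binary and the source package.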

Package dependencies and conflicts will be covered, as well as some ways to tweak the automatically generated dependencies if needed.

Installing files in the proper place requires knowledge of the Filesystem Hierarchy Standard (FHS), hence the basics of the FHS will be discussed.

The tutorial will also show how to properly package binary software, often done for internal system management purposes, and shed light on some of the issues involved, including some legal aspects related to packaging non-free software.

Package repositories and dependency resolution. Complementary to RPM, software exists for solving dependencies, such as up2date, yum, and apt-rpm. This software and the corresponding package repositories will be discussed.

Using RPM on non-Linux systems. Although primarily used on Linux systems, RPM can also be used to package software for other (free or commercial) UNIX-like systems. Some aspects of using RPM on non-RPM systems will be discussed.

Besides the theory, several issues will be illustrated with live demonstrations.

Target audience

The tutorial is targeted at system administrators and software developers who want to create or modify RPM packages, or who want detailed insight into how RPM packages are built and how they can best be used. Attendees need no prior knowledge of RPM, although some basic experience with software packages (as a system administrator using RPM, apt/dpkg, etc.) is helpful.

About the speaker:

Jos Vos is CEO and co-founder of X/OS Experts in Open Systems BV. He has 20+ years of experience in research, development and consulting -- mostly relating to UNIX systems software, Internet, and security.

His operating system of choice since 1994 is Linux. In the Linux community he is best known for writing ipfwadm and part of the firewall code in the 2.0 kernel. Having used RPM since 1996, he is known for almost never installing software without "RPM-ifying" it first. He also participated in the design of RPM's trigger scripts, later implemented by Red Hat.

His company X/OS delivers open, standards-based solutions and services. Products include support services for X/OS Linux, an enterprise-class Linux distribution, custom-built firewall/VPN appliances with embedded Linux, and high-availability cluster solutions.

extreme hacking - How to find vulnerabilities in your own network / application
by Roland Wagner
Tuesday, 2006-09-05 10:00-18:00

Statistics say a network is attacked once every five minutes, and that 70-80% of attacks come from inside the network. How can you be sure you have fixed every vulnerability on every server and workstation in your network? How can you be sure your firewall works as it should? How can you be sure your application has no vulnerabilities?

In this tutorial we will start with some basic information about the different phases of penetration testing and hacking, and how a real attacker would try to break into your network. In the practical part of the tutorial you will act as the attacker: you will gather as much information as possible about the target and use it to find vulnerabilities. You will then research those vulnerabilities and, finally, work out a solution for the security problem.
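For orientation, the reconnaissance phase typically starts with scans along the following lines; the target addresses are placeholders, and, as stressed below, scanning requires permission:

    # ping sweep: which hosts are up in the target range?
    nmap -sP 192.0.2.0/24

    # TCP SYN scan with service version detection and OS fingerprinting
    nmap -sS -sV -O 192.0.2.10

    # scan the full TCP port range on a single host
    nmap -p 1-65535 192.0.2.10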

During the session some machines will be available as victims, but we can also scan the participants' own networks or servers if permission has been given. So if you want to find vulnerabilities in your own network, ask your network's administrator or security engineer for permission to scan it.

Requirements: You should be prepared to install some basic tools on your own laptop and have good knowledge of TCP/IP. Knowledge of commonly used protocols (HTTP, FTP, SMTP, POP, SNMP, etc.) would be helpful.

Important: Participants have to bring their own Linux laptop with a working network interface (Ethernet; no Token Ring, FDDI, etc., please :-).

About the speaker:

Roland Wagner is a long-time Unix/Linux user who started with Minix and moved on to Linux in 1993, with kernel version 0.98. He has been working at Datev eG as an IT security engineer since 1999. He holds degrees in data processing technology (Datentechnik) from the Georg-Simon-Ohm-Fachhochschule in Nürnberg (Dipl.-Ing. FH) -- yes, it's a home match -- and in computer science (Dipl.-Inf. Univ.) from the University of Erlangen-Nürnberg. His main interests are embedded devices, intrusion detection and intrusion prevention systems, computer forensics and penetration testing. He occasionally teaches computer beginners and spoke about outsourcing IT security incident handling at the IT-Incident Management & IT-Forensics conference (IMF 2003).

Asterisk for the geek
by Stefan Wintermeyer
Wednesday, 2006-09-06 10:00-18:00

For those who know how to install and setup a basic Asterisk server but who want to do some nifty stuff with it.

Agenda:

  • Short summary of the very basics
  • IVR
  • Variables and expressions
  • Programming in extensions.conf (see the dialplan sketch after this list)
  • Programming with AGI
  • Meeting rooms (conference calls)
  • Queues
  • Call files
  • Misc
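As a hint of the dialplan-programming part, the following extensions.conf fragment is an invented example of a tiny IVR that uses a variable; the prompt name, peer names and extension numbers are assumptions:

    ; extensions.conf fragment -- a tiny IVR using a variable (illustrative only)
    [ivr-demo]
    exten => s,1,Answer()
    exten => s,2,Set(SUPPORT_EXT=101)
    exten => s,3,Background(welcome)              ; play a prompt and wait for a digit
    exten => 1,1,Dial(SIP/phone1,20)              ; caller pressed 1
    exten => 2,1,Goto(internal,${SUPPORT_EXT},1)  ; caller pressed 2, jump via the variable
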
About the speaker:

Stefan Wintermeyer is the author of the Asterisk book published by Addison-Wesley. His company amooma offers dedicated Asterisk training courses.

Inside the Linux Kernel
by Theodore Ts'o
Wednesday, 2006-09-06 10:00-18:00

Topics include:

  • How the kernel is organized (scheduler, virtual memory system, filesystem layers, device driver layers, networking stacks)
    • The interface between each module and the rest of the kernel
    • Kernel support functions and algorithms used by each module
    • How modules provide for multiple implementations of similar functionality
  • Ground rules of kernel programming (races, deadlock conditions)
  • Implementation and properties of the most important algorithms
    • Portability
    • Performance
    • Functionality
  • Comparison between Linux and UNIX kernels, with emphasis on differences in algorithms
  • Details of the Linux scheduler
    • Its VM system
    • The ext2fs filesystem
  • The requirements for portability between architectures
About the speaker:

Theodore Ts'o has been a C/Unix developer since 1987, and has been a Linux kernel developer since September 1991. He led the development of Kerberos V5 at MIT for seven years, and is the primary author and maintainer of the ext2/ext3 filesystem utilities. Theodore currently serves on the board of the Free Standards Group and contributes to the development of the Linux Standard Base. He currently is a Senior Technical Staff Member with the IBM Linux Technology Center.

Using Xen to partition your system
by Kurt Garloff
Wednesday, 2006-09-06 10:00-18:00

Xen3 was released a little less than one year ago and it has found its way into Linux distributions since then. Virtualization is slowly becoming a mainstream technology. With multicore CPUs, larger machines become more common and offer the resources to host many services on one physical machine. Xen offers a way to partition the system into virtual machines that can be relocated to other physical machines as required.

The tutorial quickly covers some theoretical background on Xen and then moves on to putting it to use. It explains how to plan the networking, using bridged setups as well as other possibilities. It covers the various ways to provide storage (a virtual disk) to the virtual machines and discusses the options. Setups that allow virtual machines to be relocated to other physical machines are treated in detail.
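A paravirtualized Xen 3 domain configuration along these lines might look roughly like the following sketch; the kernel paths, the LVM volume, the bridge name and the target host in the xm commands are assumptions:

    # /etc/xen/vm1 -- illustrative domain configuration
    kernel  = "/boot/vmlinuz-2.6-xen"
    ramdisk = "/boot/initrd-2.6-xen"
    memory  = 256
    name    = "vm1"
    vif     = [ 'bridge=xenbr0' ]               # attach to the default bridge
    disk    = [ 'phy:/dev/vg0/vm1,xvda1,w' ]    # an LVM volume as the virtual disk
    root    = "/dev/xvda1 ro"

    # start the domain, then later migrate it live to another host
    #   xm create vm1
    #   xm migrate --live vm1 otherhost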

In the next section, we'll go through the process of building a virtual machine around a group of processes. Here we go a step beyond the usual server consolidation targets: we not only move applications from a real physical machine into a virtual machine, but keep partitioning the applications into further virtual machines until we end up with a system of many small domains.

For the tutorial, it would be useful if attendees bring laptops and have Xen running already. We will provide virtual machine images.

About the speaker:

Kurt Garloff started hacking the Linux kernel back in 1996, when he tried, with limited success, to get his machine to reliably support an AM53C974-based SCSI adapter. He has since been involved in various open source projects, mostly kernel-related, but also compiler and security topics. He works for Novell, where he has tried to make SUSE Labs work well and now serves as the leader of the architects team. In his spare time he works on Xen and creates Xen packages for SUSE users.

Configuring and Deploying Linux-HA
by Alan Robertson
Wednesday, 2006-09-06 10:00-18:00

Intended Audience:

System administrators and IT architects who design, evaluate, install, or manage critical computing systems. It is suggested that participants have basic familiarity with System V/LSB-style startup scripts, shell scripting and XML. Familiarity with high-availability concepts is not assumed. This tutorial is intended to provide participants with both the basic theory of high-availability systems and practical knowledge of how to plan for, install and configure highly available systems using Linux-HA.

Description:

The Linux-HA project is the oldest and most powerful open source high-availability (HA) package available, comparing favorably to well-known commercial HA packages. Although the project is called Linux-HA (or "heartbeat"), it runs on a variety of POSIX-like systems including FreeBSD, Solaris, and OS X.

Linux-HA provides highly available services on clusters from one to more than 16 nodes with no single point of failure. These services and the servers they run on are monitored. If a service should fail to operate correctly, or a server should fail, the affected services will be quickly restarted or migrated to another server, dramatically improving service availability.

Linux-HA supports rules for expressing dependencies between services and powerful rules for placing services in the cluster. Because these services are derived from init service scripts, they are familiar to system administrators and easy to configure and manage.
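As a minimal sketch of the idea, a classic two-node heartbeat setup in the simple version 1 configuration style looks roughly like this; the node names, interface and virtual IP are assumptions, and the constraint and OCF features listed below go well beyond it:

    # /etc/ha.d/ha.cf -- two-node cluster, heartbeats over broadcast on eth0
    node          node1 node2
    bcast         eth0
    auto_failback on

    # /etc/ha.d/haresources -- node1 preferentially runs a service IP plus Apache
    node1 192.168.1.100 apache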

This tutorial will cover planning, installing, and configuring Linux-HA clusters. Topics covered will include:

  • General HA principles
  • Compilation and installation of the Linux-HA ("heartbeat") software
  • Overview of Linux-HA configuration
  • Overview of commonly used resource agents
    • Managing services supplied with init(8) scripts
  • Sample Linux-HA configurations for Apache, NFS, DHCP, DNS and Samba
  • Writing and testing resource agents conforming to the Open Cluster Framework (OCF) specification
  • Creating detailed resource dependencies
  • Creating co-location constraints
  • Writing resource location constraints
  • Causing failovers on user-defined conditions
About the speaker:

Alan Robertson founded the High-Availability Linux [Linux-HA] project in 1998, and has been project leader for it since then. He worked for SuSE for a year, then joined IBM's Linux Technology Center in March 2001 where he works on it full time.

Before joining SuSE, he was a Distinguished Member of Technical Staff at Bell Labs. He worked for Bell Labs 21 years in a variety of roles. These included providing leading-edge computing support, writing software tools and developing voice mail systems.

Alan is a frequent speaker at a variety of international open source and Linux conferences.

Collaboration, Community and Future Technology
by Alan Cox
Thursday, 2006-09-07 10:15-11:00

The simple public sharing of information and source code has evolved both a cultural and a legal/contractual basis, producing Free Software and Open Source and feeding strongly into ideas like Wikipedia and the Creative Commons. Technology moves ever onwards, and it is becoming more and more practical to share and evolve the designs for 2D and 3D objects; soon it is likely to be practical and viable for people to "print" their own 3D goods. What common trends are found in the development of existing collaboration cultures, and what can be predicted for the future?

About the speaker: T.B.D.

Linux as a Hypervisor
by Jeff Dike
Thursday, 2006-09-07 11:30-12:15

Currently, high-performance virtualization technologies employ either a specialized hypervisor (e.g. VMware and Xen) or unmerged (and, in their current state, unmergeable) kernel patches (OpenVZ and vserver). It would be desirable to use Linux as the host OS, effectively making it a hypervisor. This would avoid adding another OS, with its own set of tools and management problems, to the workload of the host sysadmin.

Since virtualization is a relatively new workload for mainstream operating systems, Linux historically hasn't supported guests very well. This has been changing slowly, as performance improvements for User-Mode Linux (UML) have made their way into mainline. The situation is also about to change rather more quickly: a kernel virtualization project has started, with the goals of supporting lightweight containers such as OpenVZ and vserver, allowing workloads to be containerized so they can be migrated easily, speeding up UML by implementing some virtualization support on the host, and providing support for resource management and control.

I will talk about the evolution of Linux hypervisor and virtualization support and where I see it going in the future. The historical aspect will be largely from the point of view of UML, since it is the only virtualization technology which uses the facilities of a standard Linux kernel to support fully virtualized Linux guests. Improvements to ptrace will be prominent, as it is central to Linux virtualization support. There are other helpful new facilities which are not primarily considered virtualization features, such as AIO, direct I/O, FUSE (Filesystems in USErspace), and MADV_REMOVE.

AIO and direct I/O allow for I/O and memory use improvements by eliminating the double-caching of data that is normally required when doing file I/O. FUSE, while not directly applicable to virtualization, turns out to enable some new management capabilities by allowing a UML filesystem to be exported to the host, where some important management tasks can be performed without needing access to the UML. MADV_REMOVE enables memory hot-plug for UML, which allows the host memory to be managed more efficiently.

These, and other new capabilities, while nice, are incomplete, and I will describe what is needed in the future, and how UML would make use of them.

Finally, I will describe the new virtualization infrastructure project with an emphasis on how it is useful to stronger virtualization technologies such as UML. While this is non-obvious, it turns out that the same facilities which can enable Linux to support lightweight containers such as vserver can also be helpful to full-fledged guest kernels. A prototype addition to this project allows UML processes to execute the affected system call (gettimeofday) at 99% of host speed. As the project is fleshed out, I expect similar performance from other important sets of system calls.

About the speaker:

Jeff Dike graduated from MIT and went to work at Digital Equipment Corp, where he met a number of people who would go on to become prominent in the Linux world, including Jon Hall and a large contingent which now works at Red Hat. He left Digital in 1993 during the implosion of the minicomputer market. He spent the next decade as an independent contractor, and became a Linux kernel developer in 1999 after conceiving of and implementing UML. Since then, UML has been his job, becoming a full-time paid one in mid-2004 when he was hired by Intel.

Linux on the Cell Broadband Engine
by Ulrich Weigand and Arnd Bergmann
Thursday, 2006-09-07 11:30-12:15

The Cell Broadband Engine Architecture, jointly developed by Sony, Toshiba, and IBM, represents a new direction in processor design. In addition to a PowerPC-compatible PowerPC Processor Element (PPE), the Cell BE processor features an array of eight Synergistic Processor Elements (SPEs) supporting a new SIMD instruction set that operates on 128 vector registers. The SPE memory architecture is characterized by a directly addressable 256 KB local storage area plus an explicitly programmable DMA engine to access main memory. Typical applications that benefit from this architecture are in the areas of games, media, and broadband workloads.

To exploit the capabilities of the Cell BE architecture, an application will use both the PPE and the SPEs, with computational kernels running on one or more SPEs and the PPE orchestrating computation and data flow. The Linux operating system has been extended to support this new type of application, in addition to regular PowerPC user space code. For that purpose, new kernel interfaces and user space libraries providing access to the SPEs have been created. A port of the GNU toolchain allows code generation for the SPE instruction set. Most of the kernel changes are included in recent releases; for the toolchain changes, work on upstream integration is still in progress.

In the paper, we will present an overview of the Linux kernel changes required to support Cell BE applications. We will also discuss the user space API and ABI that allow applications comprising both PPE and SPE components to be built. The question of how to debug such applications will also be addressed. Finally, we will talk about future enhancements to Cell BE support.

About the speakers:

Arnd Bergmann works for the IBM Linux Technology Center in Böblingen, Germany. He currently maintains the Linux kernel platform code for the Cell Broadband Engine Architecture.

Before joining IBM in 2002, he studied computer engineering in Osnabrück, Germany and in Espoo, Finland. He has been active in the Linux community for about eight years now, with major contributions in the areas of the System z architecture (aka s390), 64-bit platforms in general, and digital media.

Dr. Ulrich Weigand works for the IBM Linux Technology Center in Böblingen, Germany, where he is currently working on the GNU toolchain for the Cell Broadband Engine Architecture.

After receiving a Ph.D. at the Chair of Theoretical Computer Science at the University of Erlangen-Nürnberg, he joined IBM in 2000. He has since been working on the port of Linux to the System z architecture, with primary responsibility for the GNU compiler and toolchain for that platform. He is the maintainer of the System z back end in both the GNU compiler and the debugger.

Personal Firewalls for Linux Desktops
by Andreas Gaupmann
Thursday, 2006-09-07 12:15-13:00

A personal firewall differs from a traditional firewall, which filters traffic between an untrusted and a trusted network. While network firewalls are installed on routers and bastion hosts, personal firewalls are installed on desktop systems. Another distinction concerns what is being protected: the former aim to protect a network, whereas the latter are designed to protect a user. This paper shows how a personal firewall for a Linux desktop can be implemented.

Linux as a foundation for desktop systems is on the rise. Increased use of Linux on the desktop will also lead to more security incidents caused by viruses, worms, and trojans that target Linux systems, whether specifically or among others. Furthermore, a personal firewall is an important building block in an overall security architecture that specifically aims to protect the user of a desktop system.

The strict separation of kernel space and user space in Linux necessitates a layered architecture for the personal firewall. The enforcement of filter rules is only possible in the kernel (enforcement layer); for this purpose, a kernel module has been implemented which uses the kernel's Linux Security Modules (LSM) framework. The other two components, the decision layer and the graphical user interface, are implemented in user space.

The decision layer of the personal firewall may be described as event-based access control. In this model, security events are actions of applications that might lead to a compromise of the host or a disclosure of private user data. Four types of security events are filtered in order to prevent these breaches of security. Application starts are controlled in order to prevent the start of untrusted programs. Additionally, it is checked whether executables have been replaced, by comparing file checksums. Incoming connections are filtered to regulate access to local services from remote hosts. Outgoing connections are controlled to prevent applications from secretly connecting (unauthorized by the user) to remote hosts.

Security events are allowed or denied by evaluating rules in a SQLite database. If a matching rule is not found in this database, then the user is asked to decide the verdict on the security event. The graphical user interface provides a user-friendly way for managing security events.

About the speaker:

Andreas Gaupmann is a graduate student at the University of Applied Sciences at Hagenberg, Austria. His field of study is "Secure Information Systems". He recently finished his diploma thesis, "Design and Implementation of a Personal Firewall".

He holds a Bachelor's degree in Computer Science from the University of Applied Sciences at Hagenberg, Austria. The topic of his Bachelor's thesis was "Secure Programming - Buffer Overflows, Race Conditions, and Format String Attacks".

Moreover, he is the author of a patch for OpenSSH that enables user authentication using a zero-knowledge protocol. The website of the project is located at zk-ssh.cms.ac/. He presented the results of this work at the IT security conference "Sicherheit 2006" in Magdeburg.

Smart Card Technology and Linux Integration
by Heiko Knospe
Thursday, 2006-09-07 12:15-13:00

This paper discusses the use of hardware security modules with Linux-based host systems. Microprocessor-based integrated circuit cards with cryptographic capabilities (smart cards) are already well-established security modules, and new types of tokens (with USB, MMC and contactless interfaces) have evolved.

Smart cards can be used for a variety of (mostly security-related) applications: identification, authentication, signature, encryption, secure key and data storage etc. Smart cards are connected via an interface device (reader) to a host system (e.g. a PC). The use of smart cards requires an on-card application, a reader driver and host middleware and software.

During the last couple of years, a number of projects developed software, middleware and drivers for Linux on the host side. The paper analyses major on-card and off-card architectures and implementations, and explains their interplay:

  • Different types of cards or tokens and (quasi-)standards (ISO 7816, Global Platform, GSM SIM, PKCS#15, Java Card, ...)
  • Reader drivers (OpenCT and PC/SC architecture)
  • Interface standards and APIs (in particular PKCS#11)
  • High-level APIs and libraries (e.g. Open Card Framework)
  • Software, tools and smart-card-enabled applications (OpenSC, MuscleCard, PAM modules, OpenSSH, OpenSSL, Mozilla, ...); a few example commands follow below
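To illustrate how the host-side pieces fit together in practice, the following OpenSC commands are a rough sketch; the module path and the assumption of a PKCS#15-formatted card are illustrative:

    # which readers are attached (OpenCT/PC-SC view)?
    opensc-tool --list-readers

    # list objects on a PKCS#15-formatted card
    pkcs15-tool --list-certificates

    # the same token seen through the PKCS#11 interface
    pkcs11-tool --module /usr/lib/opensc-pkcs11.so --list-slots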

The paper concludes with an outlook on trends in hardware security modules and their applications.

About the speaker:

Heiko Knospe is a Professor for Mathematics and IT Security at Cologne University of Applied Sciences (FH Koeln). His research interests include security of Next-Generation-Networks, AAA protocols, mobile security and cryptographic tokens. He conducted a number of projects in these fields.

Benchmarking, round 2: I/O performance
by Felix von Leitner
Thursday, 2006-09-07 14:30-15:15

In the last round of benchmarks, presented at Linux Kongress 2003, I showed benchmark results for the BSDs and Linux, mostly concerning scalable network programming. The results led to marked improvements in scalability among most of the players.

This round of benchmarks will try to do the same, but for I/O performance. We have taken a real life data set from a high volume production system, and replayed several thousand HTTP requests. We also had more modern hardware (gigabit ethernet, SMP) at our disposal, and tried to exploit it with varying success.

The results surprised us in several cases, and provide some interesting lessons to be learned. This round, we also measured some commercial operating systems.

About the speaker:

Felix von Leitner has been involved with Linux since version 0.98 and has focused on scalability and high performance for years.

In his professional life, he consults for companies on IT security at Code Blau, a small security company he co-founded. He has spent most of this year doing code audits of commercial software.

Samba status update
by Volker Lendecke
Thursday, 2006-09-07 14:30-15:15

Samba 3 has undergone quite a number of changes in the last months, and we are still changing it rapidly, hopefully for the better. In this talk I will present the latest developments in detail, such as:

  • The handling of users, groups and SIDs has been completely re-worked. A consequence is that nested groups ("local" groups in Windows-speak) now really work.
  • Management of users and groups is done by the new "net sam" utility; "net groupmap" is deprecated (see the example commands after this list).
  • Clustering support is undergoing quite a bit of development right now; possibly I can give a live demonstration of what we are doing at the time of the conference. If not, I will present the current status.
  • Another field of work is remote management and monitoring. Depending on how development goes, I will present what has been completed by the time of the conference.
  • We have put quite a bit of effort into porting some Samba4 infrastructure back to Samba3. Transaction-based talloc and auto-generated MS-RPC stubs using PIDL are examples. I will present the current status in this area.
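As a rough illustration of what the new "net sam" interface looks like, the commands below are a sketch only; the exact subcommand names appeared during the Samba 3.0.23 development cycle and may differ in the released version, and the group and user names are placeholders:

    # create a local (alias) group and nest a domain user into it
    net sam createlocalgroup webadmins
    net sam addmem webadmins 'EXAMPLE\alice'

    # show the accounts Samba now knows about
    net sam list users
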
About the speaker:

Volker Lendecke has been a Samba Team member since 1994 and has been active in Samba development ever since. He is also a co-founder of SerNet Service Network GmbH in Göttingen, Germany, where he does a lot of Samba consulting, development and troubleshooting.

File System (Ext2) Optimization for Compressed loopback device
by Kenji Kitagawa
Thursday, 2006-09-07 15:15-16:00

We have developed a file system optimization tool, "Ext2optimizer", that rearranges data blocks for use on a compressed loopback block device.

In recent years, block-device-level compression, such as cloop and SquashFS, has become widely used, especially for live CDs. Although it requires runtime decompression, reading and decompressing the data is faster than reading uncompressed data, because CD-ROM bandwidth is narrow while current CPUs are fast.

However, there is a block-size gap between the file system (4 KB is the default on ext2) and the compressed loopback device (64 KB is the default on cloop). This causes redundant read accesses and slow boot times. Furthermore, the file system makes no effort to suppress disk seeks on a compressed block device.
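For context, a cloop image of the kind targeted here is typically produced roughly as follows (a Knoppix-style build; the image size, source tree and output paths are placeholders, and 65536 is the 64 KB compressed block size mentioned above):

    # create an ext2 image with 4 KB blocks and populate it
    dd if=/dev/zero of=fs.img bs=1M count=512
    mke2fs -b 4096 -F fs.img
    mount -o loop fs.img /mnt && cp -a /source/tree/. /mnt && umount /mnt

    # compress the image in 64 KB chunks for the cloop driver
    create_compressed_fs fs.img 65536 > fs.cloop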

Ext2optimizer rearranges the data blocks on ext2 and groups them together so that they fit into fewer blocks of the compressed block device, reducing block accesses and disk seeks. The optimization is based on a profile of file reads: a profile taken at boot time speeds up booting, and a profile taken while an application is running speeds up that application.

Ext2optimizer doesn't change the ext2 format and allows the image to be used as a normal ext2 file system.
