A SOFTWARE FRAMEWORK FOR PORTABLE AND AUTOMATED SYSTEM SECURITY HARDENING

Cristiano Lincoln Mattos
Centro de Estudos e Sistemas Avançados do Recife - CESAR
Universidade Federal de Pernambuco - CIn/UFPE
Tempest Security Technologies

Evandro Curvelo Hora
Centro de Estudos e Sistemas Avançados do Recife - CESAR
Universidade Federal de Pernambuco - CIn/UFPE
Universidade Federal de Sergipe - DCCE/UFS
Tempest Security Technologies

Fabio Silva
Centro de Estudos e Sistemas Avançados do Recife - CESAR
Universidade Federal de Pernambuco - CIn/UFPE

Marco Antonio Carnut
Centro de Estudos e Sistemas Avançados do Recife - CESAR
Universidade Federal de Pernambuco - CIn/UFPE
Tempest Security Technologies

ABSTRACT

Statistics show that most successful attacks against computer systems exploit vulnerabilities for which a correction already exists, sometimes months or years old. In many cases, the attacks use services that were unnecessarily activated, or could have been avoided with proper system configuration. System security hardening can be used to counter this, but unfortunately most system hardening practiced today is entirely manual, and few automated security hardening tools are available. This paper describes the design and structure of a security software tool which defines a framework for automating the task of system hardening in a portable and extensible manner. The framework defines the notion of security plugins: reusable components which execute specific system hardening measures, using features provided by the framework, such as user interface, undoable actions, a configuration scheme and logging.

1 INTRODUCTION

The concern for security in computer systems grows continuously, with the rise of Internet usage and the spread of computers in corporate and institutional scenarios. This concern is fueled by the number of attacks being perpetrated by crackers and (increasingly) disgruntled employees (CSI/FBI, 1998).
A marking characteristic of the majority of successful attacks is that they exploit security bugs for which patches or corrections were already available at the time of the attack, in many cases for months or even years. Also, many of these bugs are located in software (or in configurations of that software) that in many cases is not even necessary to the system's function. Two points stand out clearly from this scenario. First, market pressure has driven software developers (especially operating system vendors) to orient their products toward out-of-the-box functionality instead of security. As a consequence, the default installation of these systems carries many potentially unused services and functionalities, which are activated even when unnecessary. This trend is seen not only in operating systems but also in other types of software, such as web servers, corporate groupware suites, routers, etc. The second point is that IT personnel (system administrators, security officers, etc.) are simply not coping with the task of keeping up with even the simplest security measures, such as applying patches and corrections when new bugs are discovered, or changing default installation configurations to reflect proper security needs.

System (security) hardening is one way to handle the security problems in this situation. System hardening can be described as the act of adjusting system configuration and software to increase the level of security of a computer system: it can consist of, for example, disabling unused software, applying security patches, changing system configurations, etc. System hardening is applied mostly to operating systems, but is also common for web servers, routers, and so on.
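To make this concrete, a single hardening measure of the "changing system configurations" kind (here, revoking group and world write permission on a file) can be sketched in a few lines of Perl, the language used by the tool described later in this paper. The function name is invented for illustration, and a real tool would take the path from its configuration rather than a hard-coded argument:

```perl
use strict;
use warnings;

# Illustrative hardening measure: ensure a file is not group- or
# world-writable. Returns the (possibly tightened) permission bits.
sub harden_permissions {
    my ($path) = @_;
    my $mode     = (stat $path)[2] & 07777;   # current permission bits
    my $hardened = $mode & ~0022;             # clear group/world write bits
    if ($hardened != $mode) {
        chmod $hardened, $path or die "chmod $path: $!";
    }
    return $hardened;
}
```

A manual hardening checklist is full of steps of exactly this shape; the rest of the paper is about packaging such steps as reusable, undoable, logged components.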
Hardening a computer system is in general a manual task, and a highly system-specific one: the exact software corrections or configuration changes vary from one system to another (and with functionality requirements, too), and can bring unexpected consequences if carried out incorrectly. There are software tools that strive to automate the process of system hardening, but, as will be shown, they are in many ways limited in the face of the problem. This paper describes the design and structure of a security software tool which defines a framework for automating the task of system hardening in a portable and extensible manner, while also providing key features such as undoable actions and logging, among others. The framework defines the notion of security plugins, reusable components which execute specific system hardening measures. First, a general outline of the problem is given, along with an overview of the implementation issues and design criteria for the software tool. Next, the framework defined by the software is presented, with its guiding rationale, structure and needs, followed by future work possibilities for this project.

2 PROBLEM OUTLINE

System hardening, unlike other areas of information security technology such as firewalls and intrusion detection, has not seen much research or effort toward developing better standards and techniques. This continues to be the case even though experienced security professionals and system administrators recognize it as an essential practice for maintaining a computer system's level of security. The key words in defining the current state of system hardening technology, in a wide sense, are fragmentation and lack of automation. The lack of automation is evidenced by the high number of security hardening guides and security checklists: documents which detail what type of measure the administrator has to apply manually to his system. In contrast to the number of guides, there are very few tools to assist in automating the job, such as Bastille Linux or HardenNT. The fragmentation appears when we consider that each of these documents is highly specific, targeting only certain systems or even only certain profiles of systems (e.g., public web server, private web server). The class of software tools mentioned above is also oriented to specific versions of operating systems. The end result is a large and heterogeneous environment and only two options for the administrator: rely on the different, specific guides and apply the hardening manually to each system, or use many software tools, with different features, interfaces and logging schemes, to automate part of the process. The problem escalates when one considers that nowadays it is necessary to harden not only operating systems, but also the third-party daemons of each service offered, such as web, mail, FTP, groupware, file and print, etc. Security hardening is a practice which, by its nature, has to be specific to each system, as systems exhibit critical differences and behaviors in response to system changes.
On the other hand, there are certain similarities in the process that cannot be ignored, even across platforms (e.g., checking which patch level a system is at, and downloading the necessary corrections). These similarities are even greater between different types of system of the same family, such as between the various flavors of the UNIX family, or between Windows NT 4.0 and Windows 2000. In this type of situation, a system hardening tool which defined a common framework for the various platforms would help. Even for systems in heterogeneous environments that do not have many similarities in the security hardening process (e.g., between UNIX and Windows), there is benefit in using a tool within a common framework: consistent mechanisms for key features, such as interface, logging and undo, would be provided. The idea behind the proposed software is based on (Hora, 1999) and defines a common framework in which different requirements for security hardening of systems can be met. This framework will accommodate different kinds of security checks and measures, tailored to specific system necessities. The design criteria behind this idea can be broken down into:

- Portability: the tool must be portable to various platforms. Although this property is not mandatory for specific security checks and measures (though they should be as portable as possible), the basic structure which defines the framework should be portable;
- Extensibility: the software must be highly reusable and extensible. In fact, most of it should be implemented through reusable plugins, objects that encapsulate a specific functionality. In general, a security check on a system will be implemented as a plugin (or a class of plugins), but key features of the framework itself, such as logging or undo, should also be based on a pluggable structure;
- Flexibility: each security check will need control data to dictate which parameters it should use for its functions.
  The framework should define a common configuration format which provides these parameters to the plugins (security checks). This configuration format has to be flexible, to accommodate the different needs of different types of plugins;
- Administrative issues: the software tool, and the framework it provides, should strive to be as friendly and practical to the user as possible, providing features like configurable logging, the possibility of a graphical user interface, and undoing actions taken by the software on a system.

The implementation of the software is under way, and it is currently called PASH (Portable Automated Security Hardener). By providing a consistent framework with features like the ones mentioned before (logging, undo, user interface, etc.), we hope to strike a common ground in developing security hardening measures. The framework offers this set of services to the plugin developer, who implements only a specific security check, using the services offered by the framework. An example of such a plugin would be a UNIX security check which identified and corrected unnecessary files with the setuid flag set: by using the framework's features through a standardized API, the plugin could also provide, for instance, undo of its actions and a uniform user interface, with no need to implement them itself. In a Microsoft Windows context, other examples would be a plugin defining certain registry security keys, or applying access control lists, or disabling unused daemons, etc. The PASH framework is to be released as open-source software (Raymond, 1997), with its development (and that of its plugins) being open to the community at large, a development policy that is known to work for extensible and pluggable security tools such as Nessus. PASH plugins will have extreme flexibility over their actions, so that nearly any kind of check that can be implemented in the chosen programming language can be used. Given its flexible structure, and since it provides services like reporting, the tool could also be used for automated security auditing, given that auditing and hardening often perform common functions. The software is to be used basically on corporate computer servers, by a system administrator or security engineer. Like all system hardening measures, it should be executed as soon as the base operating system and software are installed, before the machine is moved into a fully working environment.

2.1 Implementation Issues

The design criteria of portability, extensibility and flexibility weigh heavily on the decision of which programming language should be used to implement PASH. The language chosen was Perl, the Practical Extraction and Report Language (Wall, 2000). Perl fits nicely into the requirements defined by the design criteria:

- It is an extremely portable language, being available and well supported on practically every UNIX flavor (Solaris, Linux, HP-UX, AIX, Digital Unix, Irix, UnixWare, etc.) and also on the Windows platforms (Windows NT, Windows 9X, Windows 2000, etc.).
- Even while being portable, Perl has functions specific to each platform, essential for a thorough hardening process: it has APIs that encapsulate UNIX system calls and APIs to access Windows services, like the registry;
- It is a very popular language. That leads to a multitude of publicly available libraries, modules and packages providing different functionality through the CPAN repository. These modules are essential in maintaining the flexibility component of PASH, which can use them instead of reimplementing their functionality. Another advantage of a popular language is that the community of plugin writers will be larger;
- While primarily a structured language, Perl has incorporated object-oriented aspects as well. Most object-oriented programming features are present, such as inheritance, polymorphism, data encapsulation, and so on (Budd, 1991). While at its core still a structured language, Perl's OO features are enough to guarantee the flexibility and reusability of the PASH components and plugins;
- Perl is a very powerful and expressive language. At the same time that it can reference low-level structures like pointers to memory, it incorporates high-level data structures, like hashes, while still being simple to program. Its core features and modules provide a very flexible base on which to build the software.

Like any other type of programming which requires portability, the developer has to restrict himself, as far as possible, to portable functions in his code. This is important even between similar platforms, such as the various UNIX flavors: slight differences between UNIX variants can be overcome by using the verifiably portable functions provided by Perl.

3 PASH STRUCTURE

One of the goals in the development of the software was to make it as flexible as possible. This implies a software architecture heavily based on object-oriented programming and reusable components.
Throughout the entire structure of PASH this concern is evident. An overview of the structure of the software is given below, followed by specific information on each area of that structure.

3.1 Structure Overview

The PASH tool is structured in five major areas: logging, undo, interface, configuration and security plugins. The security plugins are the core of the tool. A security plugin should be implemented for each type of hardening method that is to be executed; in other words, a security plugin represents a specific security check or hardening process for a given type of system. These plugins are objects, implemented in Perl (as Perl modules), with a well-defined interface for interaction with the rest of the framework. On loading, the plugins register themselves with the framework, enabling their execution. The plugins can have categories associated with them, and their execution can be based on this category. For example, if a plugin identifies itself as a hardening method for a UNIX system, it will never be instantiated for execution on a Windows system.

The security plugins need configuration information to function adequately. This information is processed and provided to each plugin by the configuration module. Most plugins will need various different parameters, with specific formats and values, and the framework cannot anticipate the number and type of information a plugin will need in order to function properly. For example, a given Windows plugin might need to receive a list of registry keys to change, while a UNIX plugin might need to receive the path to a server configuration file. In light of this need, all the configuration information of PASH and of its plugins is handled in configuration files. These configuration files are plain ASCII text files, with a specific hierarchical format, similar to a tree structure, for assigning information and parameters to plugins. The structure is based on configuration blocks, where each block can contain keys (in other words, fields or parameters) with assigned values. Each block can also contain sub-blocks, allowing hierarchical nesting. The number of blocks and keys available is limited only by the system's available memory. This type of structure gives the necessary flexibility in specifying information for the plugins, allowing complex hierarchical structures. In fact, this configuration format is used not only for the security plugins, but also for the general configuration of PASH and its basic modules, like logging, undo, etc. It is detailed in a later section.
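The hierarchical block format just described maps naturally onto nested Perl hashes. The following parser is a minimal illustration of the idea, for a simplified version of the format (one item per line, no quoting, comments or escaping rules); it is a sketch, not PASH's actual implementation:

```perl
use strict;
use warnings;

# Parse the hierarchical block format into nested hashes.
# Each "label {" opens a sub-block (a hash reference under that label),
# "}" closes it, and "key = value" pairs become scalar entries.
sub parse_config {
    my ($text) = @_;
    my $root  = {};
    my @stack = ($root);
    for my $line (split /\n/, $text) {
        $line =~ s/^\s+|\s+$//g;                  # trim whitespace
        next if $line eq '';
        if ($line =~ /^(.+?)\s*\{$/) {            # "label {" opens a block
            my $block = {};
            $stack[-1]{$1} = $block;
            push @stack, $block;
        }
        elsif ($line eq '}') {                    # "}" closes current block
            pop @stack;
        }
        elsif ($line =~ /^(\S+)\s*=\s*(.*)$/) {   # "key = value"
            $stack[-1]{$1} = $2;
        }
    }
    return $root;
}
```

With a structure like this, handing a plugin only its own labeled block (including sub-blocks) is a matter of extracting one hash reference from the parsed tree.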
The framework provides four basic types of services to the security plugins: configuration management, logging, undo, and interface. Security plugins have access to logging functions through the log module. This module, also based upon plugin objects, provides for the storage of logging information in a specified log format. The verbosity of the log can be defined at a global level, or overridden by the plugin. One advantage of this approach is that another type of logging plugin can be created (one that stores the logs in a DBMS, for example) and offered transparently to the security plugins.

The undo functionality is an important feature provided by the framework to the security plugins. Through this scheme, before a plugin takes any action that will change the system's configuration or state, it must successfully register that action with the undo module. The undo module saves the information necessary for undoing the particular action. If the user later wants to undo an action executed by a PASH plugin, an interface is provided where he can undo each action ever performed by a plugin. The undo feature is implemented in a way that does not limit the functionality of the security plugin.

Last but not least, the framework also provides user interface functionality to the security plugins. No security plugin should interact directly with the user through normal Perl functions; instead, the framework's user interface functions should always be used. Currently, only a text-based user interface is implemented, but when a graphical user interface is available, it will transparently replace the interface used by the plugins. The user interface also implements a series of functions to provide common functionality needed in the interaction between plugins and the user, such as multiple-choice questions, questions where a user is required to type something, and others.
The entire software revolves around the concept of plugins: extensible objects with a defined function which can be reused, inherited, extended, etc. In fact, not only the security checks are plugins: all the other major features, such as logging, interface and undo, are also plugins, packaged together in the common framework. This structured approach guarantees the flexibility needed for this kind of tool. In this context, the security plugins need to access the services provided by the framework. This is done through a manager object. This object is passed to the plugin at its initialization, and is used by the plugin to communicate with the rest of the framework. The manager object for a given plugin contains all of the plugin's configuration information, already processed by the configuration module. The manager object also acts as an object factory (Gamma, Helm, Johnson, Vlissides, 1995): when a plugin needs logging, undo or interface functionality, it requests one from the manager, which, through its internal configuration schemes, instantiates the appropriate object and passes it to the plugin, which from then onwards has an object providing the needed functionality. With this approach, the manager separates the plugins from the rest of the framework. Encapsulation is thus enhanced, giving the manager the freedom to decide, for example, which type of interface object (text, graphical) should be instantiated and returned to the plugin.
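The plugin/manager relationship described above might look as follows in Perl. All method names here (plugin_config, get_logger, get_interface, run, category) are invented for illustration; the paper does not fix the actual PASH API:

```perl
package SamplePlugin;
use strict;
use warnings;

# Illustrative security plugin. The constructor receives the manager
# object, through which all framework services (config, log, undo,
# interface) are obtained. Method names are hypothetical.
sub new {
    my ($class, $manager) = @_;
    my $self = {
        manager => $manager,
        config  => $manager->plugin_config,   # this plugin's config block
    };
    return bless $self, $class;
}

sub category { return 'Unix' }                # used for selective execution

sub run {
    my ($self) = @_;
    my $log = $self->{manager}->get_logger;    # manager acts as a factory
    my $ui  = $self->{manager}->get_interface;
    $log->message(1, 'SamplePlugin starting');
    $ui->display('Checking system...');
    # ... perform the actual security check here ...
    $log->message(2, 'SamplePlugin finished');
    return 1;
}

1;
```

Note that the plugin never touches the framework internals directly: swapping a text interface for a graphical one is entirely the manager's decision.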

3.2 Configuration Module

The configuration module supplies the necessary information for each plugin to work properly. It does this not only for the security plugins, but also for the other services implemented as plugins, such as logging, undo, etc. Its configuration information comes from an ASCII text configuration file, in a specific hierarchical format. The format defines configuration blocks, with labels to identify each block. Inside a block, there can be any number of key/value pairs; there can also be sub-blocks, with specific labels, and these sub-blocks can also have key/value pairs, other sub-blocks, and so on, as illustrated in Figure 1.

block_label_a {
    key1 = value1
    key2 = value2.1 value2.2
    block_label_b {
        key3 = value3
        key4 = value4
    }
}
block_label_c {
    key1 = value5
}

Fig 1. Example of configuration file format

There are no restrictions on the number of blocks or keys. The value of a key is considered to be whatever comes after the = sign and before the end-of-line character. There is no format imposed on the values of keys: their interpretation is left to the plugin. The configuration module interprets the configuration file format and stores it in a hash structure, sub-blocks included. A plugin needs to know only its own configuration information; because of this, the information must be contained in a block whose label identifies the plugin by name. When the plugin is instantiated, it is passed only the information contained in its labeled block (including sub-blocks), in a hash structure. This same use of the label to identify a block's purpose applies to the configuration information of the other types of services, such as logging, undo, etc. An excerpt of an example configuration file can be seen in Figure 2.
plugin suidcheck {
    active = yes
    interactive = yes
    source = /PASH/SuidCheck.pm
    name = SuidCheck
    category = Unix/Linux Unix/Solaris
    suid_exclude_list = su passwd login sudo
}
global_options {
    interactive = yes
    plugin-dir = /PASH/plugins/
    log {
        log-dir = /PASH/logs/
        log-name = PASH.log
        log-level = 2
    }
    undo {
        undo-dir = /PASH/undo/
        undo-journal = /PASH/undo/journal
        undo-plugins {
            plugin backup {
                name = UndoBackup
                source = /PASH/UndoBackup.pm
            }
            plugin attribute {
                name = UndoAttribute
                source = /PASH/UndoAttribute.pm
            }
        }
    }
}

Fig 2. Excerpt from a sample configuration file

The first block, labeled "plugin suidcheck", defines the configuration options for the SuidCheck plugin. The key/value pairs of the block are passed as a hash structure to the plugin when it is initialized for execution. The following block, labeled "global_options", defines options pertaining to the global framework and its services. Note that the logging service has a sub-block of its own, as does the undo service. Reinforcing the idea of reusable components (plugins), much of the undo functionality is implemented as plugins; each of these undo plugins, then, has its own configuration block. With this type of configuration structure, there is extreme flexibility in the configuration of each plugin. Plugin developers use a defined configuration format which allows them to express nearly any type of information necessary for the correct functioning of the plugin. The user, on the other hand, can easily find and edit the configuration for each plugin, tailoring its execution to his specific needs.

3.3 Logging Module

The logging module provides a way for the plugins and objects of PASH to register their actions, not only for debugging but also for auditing purposes.
If a plugin needs to log some type of activity (most do, if only to log errors), it requests a Log object from the manager object, which instantiates one and returns it to the plugin, which can then call methods on the returned object. The logging module defines four levels of verbosity, which control how much information should be logged:

- Level 0: essential information that must be logged, such as initialization information, critical errors, etc.;
- Level 1: informative messages describing generically the current actions being executed by each plugin;
- Level 2: messages describing in detail what is being done by each plugin, allowing the user to know exactly what is being performed;
- Level 3: debugging messages that are mostly useful for developers, with no effort to be user-friendly or descriptive.

The plugins call logging methods on the log object, specifying the text to be logged and the level of verbosity at which the text should be logged. The user, through command-line options or through the configuration file, defines the level of verbosity to be adopted for the execution session. Only messages whose specified verbosity level is less than or equal to the session verbosity level are logged. For example, if a plugin tries to log a message at level 2, it will only be effectively logged if the session level is 2 or higher. This way, the user can control exactly how much information is logged, while still retaining the possibility of logging more information when needed, by adjusting the session log level, with no changes to the plugins. Currently, the log module only stores the logged information in a specified text file, based on configuration information. However, using the plugin architecture, it is possible to extend the module to log the information in other manners. For example, a useful extension could implement logging through the UNIX syslog service, allowing online remote logging. Another extension could store the logs in a DBMS for later retrieval. The possibilities are varied, and can be adapted to different needs.

3.4 Undo Module

The actions taken by a security plugin have the potential to disrupt system services, by changing configurations, disabling services, installing patches, etc.
In fact, this is probably one of the major reasons why the use of automated security hardeners is not widespread: the fear of incorrectly changing the system state and then having to correct the damage. One of the ways in which PASH addresses this problem is by providing undo functionality to its security plugins. A plugin uses an undo object (requesting one through the manager object) to register undo information before executing an action that can change the system state, such as deleting a file, changing an attribute, disabling a service, and so on. The undo object then takes the necessary measures to save any information that could later be used to undo the action.

There are two main issues at hand when considering the functionality provided by the undo module. The first is that the choice of whether or not to register undo information, and of which undo information to register, is the sole responsibility of each plugin. In other words, the plugin developer must have the discipline of calling a method on an undo object to save the information before taking actions that change the system state. If the plugin does not do this, it will not be possible to undo the action later. This procedure, although it transfers responsibility to the plugin developer, fits neatly into the design criterion of flexibility. The alternative would be for the framework to implement a series of wrapper functions, through which the plugins would access and change the system state, automatically having their undo information saved. Although easier for the plugin developer, this would be a limited scenario: undo information would only be available for functions and actions which the framework had previously implemented, effectively limiting the possibilities of each plugin. The second issue is that the information that should be registered to successfully undo an action is highly dependent upon the nature of that action.
For example, if a plugin is going to change the attributes of a file (such as its permissions), it need only register for undo (in other words, save) the current attributes of the file before changing them. Because of this, the undo module has to be flexible enough to handle these different necessities, even across platforms: there is no need for a type of undo that saves information about Windows registry keys to be loaded on a UNIX system, for example. This leads to the natural conclusion that the undo scheme should also use the plugin architecture. Each different type of undo plugin implements the necessary undo procedures for a different type of action. The security plugins, knowing what type of action they will take, use the appropriate undo plugins for that action. Undo plugins can be shared between different platforms, or used only on specific ones. Examples:

- Backup-undo plugin: this undo object, when used, creates a backup copy of a specified file, automatically saving its attributes (owner, permissions, etc.). It can be used when a security plugin is going to change or remove a file, guaranteeing that the file can be restored later;
- Move-undo plugin: this undo object should be used when a security plugin is going to move a file from one location to another. The undo object saves the location information so that the file can be moved back later;
- Create-undo plugin: this undo object should be used when a security plugin is going to create a new file on the system. The undo object saves information on which file was created so that the action can be undone (the file deleted) later;
- Attribute-undo plugin: this object should be used when a security plugin is going to change file attributes (owner, group, permissions, etc.). The undo object saves these attributes so that they can later be restored. In this case, the types of attributes differ between UNIX and Windows systems, so there would have to be an attribute-undo plugin for UNIX and another for Windows, probably inheriting common properties from a parent object.

As seen, the plugin model permits flexible development of new undo actions, according to rising necessities. Each undo plugin stores the undo information it collects in a journal file. The information registered in the journal varies with each type of undo plugin used, but it should be enough to restore the system to its previous state. Each entry in the journal is also timestamped and labeled, easing later processing of the file. In the same way that each undo plugin knows what information needs to be registered in the journal, it also must know how to process that information, effectively undoing the action. In this sense, the undo plugin represents an entity which not only registers the information, but also executes the operations necessary to undo the action. For example, if a security plugin needs to change the permissions of a file, it would first request an attribute-undo object from the manager object, which would instantiate one and return it to the security plugin.
Then, before executing the attribute change, the security plugin would call a method on the undo object, passing it the necessary parameters; in this case, just the file name is enough. The undo object, based on its parameters, would save the necessary information for undo in the journal: the file's current attributes. After receiving confirmation from the undo object that the information was saved to the journal, the security plugin could continue with its actions. In fact, if needed again, the same undo object could be reused to register other actions; if a different type of action (for example, deleting a file) were to be performed, the security plugin would have to request an instance of a different undo plugin from the manager.

If, after the execution session, the user decides to undo one or more actions taken in the session, he executes PASH with the appropriate command-line options, indicating undo mode. The journal is processed, identifying all the undoable actions taken in each past execution session. The user is given choices as to which sessions or actions he wishes to undo. Upon choosing a set of actions to be undone (in this example, the change of a file attribute), the label information in the journal is used to identify which undo plugin registered the undo information for that specific action. That undo plugin is then instantiated and used to effectively undo the change, restoring the original attributes to the file. As with any undo system based on journaling, the preferred method of restoring is from the most recently registered changes to the oldest, so as to minimize the possibility of one undo change impacting another. This is the default method, but the option of individually examining each registered action and undoing specific actions (out of order) is also presented.
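The attribute-change walkthrough above can be sketched in Perl. This toy version keeps the journal in memory and handles only permission bits; a real undo plugin would persist timestamped, labeled entries to the journal file, as described. The class and method names are invented for illustration:

```perl
package AttributeUndo;
use strict;
use warnings;

# Illustrative attribute-undo plugin: records a file's permission bits
# in an in-memory journal before a change, and can restore them later.
sub new { my $class = shift; return bless { journal => [] }, $class }

sub register {                      # called BEFORE the plugin acts
    my ($self, $path) = @_;
    my $mode = (stat $path)[2] & 07777;
    push @{ $self->{journal} },
        { path => $path, mode => $mode, time => time };
    return 1;                       # confirm the journal entry was saved
}

sub undo_last {                     # undo entries newest-first
    my ($self) = @_;
    my $entry = pop @{ $self->{journal} } or return 0;
    chmod $entry->{mode}, $entry->{path} or die "chmod: $!";
    return 1;
}

1;
```

A security plugin would call register on the file before its own chmod; running the tool in undo mode would later call undo_last to restore the saved bits, processing the journal newest-first as described above.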
3.5 Interface Module

Usability and, to a certain extent, a friendly user interface are key features in making a piece of software widely used. The PASH framework strives to meet this need with its interface module, while at the same time providing the security plugins with a simplified API for interaction with the user. Each plugin developer could, using the features of the Perl language, develop a user interface of his own. The problem is that this would lead to different user interfaces for each plugin, with different options and assumptions about the user. The only way to fix this is to strive for a common interface between the security plugins, by having them use a standardized API, provided by the framework, for interaction with the user. There can be many options of user interface available to the framework: text-based interfaces, with the user typing in most commands or choosing from menus; graphical user interfaces with windows, buttons and events; and even other variations on this theme: for example, a web-based user interface could be conceived, where the user would control PASH through a browser. Each different type of interface is to be implemented as a plugin, borrowing from the concepts already demonstrated in other modules. The user, through the configuration file, could choose which user interface is preferred. This could also be chosen automatically, depending on context. For example, when executing on Windows systems, a GUI could automatically be started, while on a UNIX shell the text-based approach could be better suited. Thus, there would be plugins for both types of interface, loaded according to necessity. Since the plugins all use a common API for interacting with the user, the offered interaction would be the same, no matter which interface is used. Most of the necessary interaction between a security plugin and the user fits into well-defined cases. Functions to implement these cases have to be provided by the interface plugin. The following functions are currently implemented:

- Simple display of information: the plugin wants to output a message to the user, by calling a method on the interface object with the message as a parameter. In a text-based interface, this would print the message on the screen. In a graphical interface, this could pop up a message box, or print the message on a window panel, for example;
- Ask the user to type in a string of text: the plugin needs the user to type in a certain string of text. A method on the interface object is provided for this, with the plugin passing a descriptive message (such as "Type in the name of the file"), and the default value to be assumed should the user type nothing;
- Ask the user to choose between a few options: the plugin needs the user to choose between different options. A method on the interface object is provided for this, with the plugin passing a descriptive message (such as "Answer Y for Yes, N for No and C for Cancel"), the possible options (in this example, "N", "C" and "Y"), and the default value to be used;
- Ask the user to choose between numbered options: the plugin needs the user to choose one of many possible options. A method on the interface object is provided for this, with the plugin passing a descriptive message, the list of possible options, and the default value to be used.

These are but a few of the possible functions offered by the interface plugin to the security plugins. Note that the emphasis of the offered functions is on obtaining or displaying information, not on how that information is displayed or obtained. With this kind of independence, a plugin's user interface can be changed from text-based to graphical with no change to the plugin itself.

3.6 Security Plugins

The security plugins are the center of the framework; all the other services mentioned above exist to help the plugin developer. The goal of each security plugin is to implement a specific security hardening procedure, wherever possible using the features provided by the framework. Each security plugin can implement more than one security check or hardening procedure. If the need arises for new checks, a plugin can be extended, or another plugin can be developed. This decision is very specific, depending on the nature of the hardening procedure, the modularity of a given plugin, and other criteria. In keeping with the design criterion of flexibility, the security plugin developer can do basically anything that the Perl language allows. Given Perl's properties, the author can implement practically any task found in a security hardening process. The framework strives to impose no limits on the plugin, acting basically as a provider of useful services. In fact, the security plugin does not have to be entirely programmed in Perl. This may be necessary where a task cannot be adequately done using Perl. One case where this has come up is in using existing COM objects to harden Windows platforms: Perl's COM support is not yet mature.
What can be done in this case is to develop an external program in whatever language is suitable (Visual Basic, C, C++, etc.), and have that program be executed by the Perl security plugin, which feeds it the necessary options and processes the results of the execution. The security plugin written in Perl would act as a stub for the external program, integrating it into the framework. Even while not imposing limits on what a security plugin can implement, the framework requires that the developer adapt to a few measures and programming interfaces, so as to maintain integrity with the framework. These can be inferred from the previous sections, but are collected here:

- The plugin must never interact directly with the user, but only through the method calls provided by the interface object, to maintain interface independence;
- The plugin, if it wants its actions to be undoable by the user, must successfully register them with the appropriate undo object before actually executing them. This procedure should be followed for any action which changes the system state or configuration;
- The plugin will use the manager object for interaction with any other component of the framework (and with other plugins), effectively using it as a factory object;
- The plugin should, if at all possible, use the portable functions provided by Perl, to maintain maximum portability between platforms;
- The plugin will require a few basic key/value pairs in its configuration information. These are discussed in more detail below;
- The plugin should strive to conform to object-oriented principles such as inheritance, data encapsulation, polymorphism, and so on, easing the reuse of plugins, especially where portability is concerned;
- The plugin must implement a few basic methods, for integration with the rest of the framework, especially in the initialization process. These will be detailed in a following section.

The security plugin, for adequate functioning, will probably need some basic information about the environment on which it is being executed. It may obtain this information by itself, but the framework, through the manager object, can provide it to the plugin. Among the basic information provided are: type of operating system (UNIX or Windows, for example), name of the OS (Linux, Solaris, Windows NT, etc.), OS version, category, architecture, user ID, group ID, process ID, etc. Also, depending on the system, specific information can be available, such as which distribution of the operating system is being used (if Linux, for example), or what the service pack level of the OS is (if Windows). The plugin can then base many internal decisions on this information, increasing its portability. As mentioned before, the configuration file contains specific blocks for each plugin, containing the parameters and configuration information for that specific plugin. The security plugin receives the configuration information at initialization, in the form of a hash structure with key/value pairs. This hash structure also contains any sub-blocks that may have been specified in the configuration file, lending flexibility to a plugin's configuration scheme. How each key/value pair is interpreted is up to the plugin.
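The environment information the manager hands to plugins can be gathered portably. As a rough illustration only (PASH is Perl; this Python sketch and its field names are hypothetical), a single function can collect the OS type, name, version, architecture, and process/user identifiers described above:

```python
import os
import platform

def environment_info():
    """Sketch of the basic environment information a manager object might
    provide to plugins (all field names are hypothetical)."""
    system = platform.system()          # e.g. "Linux", "SunOS", "Windows"
    info = {
        "os_type": "Windows" if system == "Windows" else "UNIX",
        "os_name": system,
        "os_version": platform.release(),
        "architecture": platform.machine(),
        "pid": os.getpid(),
    }
    if hasattr(os, "getuid"):           # user/group IDs exist only on UNIX
        info["uid"] = os.getuid()
        info["gid"] = os.getgid()
    return info
```

A plugin could then branch on `info["os_type"]` or `info["os_name"]` instead of hard-coding platform assumptions, which is the portability point the text is making.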
However, it is common to have at least a few keys which are required to have values, so that the plugin can function. Some keys are required to exist in the configuration block of every plugin, defining basic parameters necessary for integration with the framework:

- Active: this key specifies whether the plugin is to be executed or not, and can have the values "yes" or "no";
- Source: this key specifies the file which contains the source code of the plugin;
- Name: this key specifies the name of the plugin; it should be the same name used inside the source-code file;
- Category: this key specifies the category to which the plugin belongs, and is explained in more detail further on.

Global options can be provided, but they are always overridden by plugin-specific options, which are in turn overridden by command-line options. One of the basic properties of a plugin, defined in the configuration file, is its category. The main function of the category is to define whether or not the plugin will be executed on a certain operating system; in other words, it is the plugin's execution scope. It is defined in terms of OS platform, name, distribution and version. In this way, there is enough granularity to define whether a plugin should execute or not. A plugin can belong to more than one category. For example, if a certain plugin is known to execute on all Windows NT versions and all Windows 2000 versions, its category key in the configuration file could be "Windows/NT Windows/2000". In the same manner, if a plugin is known to execute on RedHat Linux 7.0 and on Solaris 2.6, it could be "Unix/Linux/RedHat/7.0 Unix/Solaris/*/2.6". The "*" in this case indicates that the plugin works on all distributions, or that there is no distribution. Of course, this category labeling scheme cannot represent finer distinctions between OS versions, patch levels and such. If a plugin needs this information to know whether it should execute or not, these checks should be implemented inside the plugin.
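The category matching just described is a prefix match over slash-separated labels, with "*" as a wildcard component. A minimal sketch (in Python for illustration; the paper does not give the actual matching code, so the function name and signature are assumptions):

```python
def category_matches(category, platform_path):
    """Decide whether a plugin's category covers the running platform.

    `category` is a space-separated list of labels such as
    "Unix/Linux/RedHat/7.0 Unix/Solaris/*/2.6"; a "*" component matches
    any value (e.g. any distribution). `platform_path` describes the
    current system in the same Platform/Name/Distribution/Version form,
    e.g. "Unix/Solaris/Generic/2.6". Hypothetical sketch only."""
    target = platform_path.split("/")
    for label in category.split():
        parts = label.split("/")
        if len(parts) > len(target):
            continue                    # label is more specific than the platform
        if all(p == "*" or p == t for p, t in zip(parts, target)):
            return True                 # every component matches (or is a wildcard)
    return False
```

A short label like "Windows/NT" thus covers every NT distribution and version, which matches the granularity the category scheme is meant to provide.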
3.7 Program Flow

The sections above showed how each main part of the framework behaves. This section aims to give a practical view of how they all work together, defining the program flow. For the execution of a hardening session with PASH, all the above components are tied together by the, for lack of a better name, init module. This module is responsible for the initialization of the software when the user executes it. The first thing it does is process the command-line options, setting internal variables accordingly. Then, it opens the configuration file and feeds it to the configuration module, which processes it and returns the hash structure with all the options. Next, the init module initializes the manager object and feeds it the necessary configuration information. Based on the configuration information, the init module will then identify the plugins (by their configuration-block labels), load them (using the Source key information), and initialize (instantiate) each plugin, passing it its configuration information and a reference to the manager object. If initialization completes correctly, the init module will check whether the plugin can be executed on this platform (checking the plugin's Category key), and execute each plugin (by calling a predefined method), one after the other. This is the basic program flow for a hardening session. As can be seen, the purpose of the init module is only to kick-start the loading and execution of the pertinent plugins, within the framework.

4 DEVELOPMENT STATUS AND FUTURE WORK

The basic implementation of PASH is developing at a rapid pace, with the basic features of the framework (undo, logging, configuration management and a text-based interface) already developed. The main development platform for PASH has been RedHat Linux. As such, it is also the first platform for which a set of security plugins has been developed. It is now possible to use PASH, together with these plugins, to harden RedHat Linux 6.0, 6.1, 6.2, 7.0 and 7.1 systems. The basic framework has been tested on other UNIX systems (Solaris, AIX), and should work on any others with Perl support; of course, for a complete hardening session, it is necessary to have specific security plugins developed for these platforms too. Current development work is focused on integrating the Windows platform into the framework. This is being approached from two directions. The first is the development of security plugins for Windows NT and Windows 2000, and also for their more popular server software, such as the Internet Information Server. The second line of development is the slight adaptation of the basic features of the framework to the platform; for instance, many Windows hardening measures change registry keys, demanding an undo plugin for undoing registry changes; the attribute-undo plugin also has to be extended to support Windows file permissions.
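The init-module flow described in Section 3.7 (parse options, process the configuration, set up the manager, then load, scope-check, and run each plugin) can be sketched compactly. PASH is Perl; this Python sketch is illustrative only, assumes the configuration has already been parsed into per-plugin blocks, and every name in it (`Manager`, `run_hardening_session`, `in_scope`, the `execute` entry point) is hypothetical:

```python
class Manager:
    """Stands in for PASH's manager/factory object (hypothetical)."""
    def __init__(self, config):
        self.config = config

def in_scope(category, platform_path):
    # simplified wildcard category check: "Unix/Linux" covers "Unix/Linux/RedHat/7.0"
    target = platform_path.split("/")
    for label in category.split():
        parts = label.split("/")
        if len(parts) <= len(target) and all(
                p == "*" or p == t for p, t in zip(parts, target)):
            return True
    return False

def run_hardening_session(config, platform_path, plugin_classes):
    """Illustrative init-module flow: instantiate each active, in-scope
    plugin with its configuration block and a manager reference, then
    execute it via a predefined method."""
    manager = Manager(config)
    executed = []
    for label, block in config.items():          # one block per plugin
        if block.get("Active") != "yes":
            continue                             # plugin disabled in configuration
        if not in_scope(block.get("Category", "*"), platform_path):
            continue                             # out of scope on this platform
        plugin = plugin_classes[block["Name"]](block, manager)
        plugin.execute()                         # predefined entry-point method
        executed.append(block["Name"])
    return executed
```

The init module itself does nothing else: all actual hardening work lives inside the plugins, which is the point the text makes.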
These types of changes are small and specific, due to the modular nature of the software. A graphical user interface plugin for Windows will be developed only after the main security plugins are ready. The nature and look-and-feel (HTML, native, etc.) of this GUI is not yet defined. When the development of PASH on the Windows platform reaches the same state as that of RedHat Linux, we will release the first publicly available version, under the GPL. Future lines of development for PASH appear all the time. The addition of a reporting service to the plugins would be a likely candidate for future implementation, allowing the generation of configurable reports for each hardening session. As new bugs and vulnerabilities are discovered each day, system hardening must be faced as a continuous process. Fitting into this scenario, an interesting development for PASH might be the addition of remote control capabilities to the framework, allowing the efficient administration of system hardening for large-scale networks. The authors hope that the project will go on as fast as it has up to now, especially with the addition of plugin developers and contributors, and that it will be used for other types of security functions (such as auditing), true to the principle that "the mark of a good tool is that it is used in ways that its author never thought of" (Spafford and Kim, 1993).

REFERENCES

GAMMA, Erich; HELM, Richard; JOHNSON, Ralph; VLISSIDES, John. Design Patterns: Elements of Reusable Object-Oriented Software. Massachusetts: Addison-Wesley, 1995.

HORA, Evandro Curvelo. Sobre a percepção remota de sniffers para detectores de intrusão em redes TCP/IP. Dissertação de Mestrado. Centro de Informática, Universidade Federal de Pernambuco.

WALL, Larry; CHRISTIANSEN, Tom; ORWANT, Jon. Programming Perl. 3.ed. New York: O'Reilly, 2000.

RAYMOND, Eric S. The Cathedral and the Bazaar. (~esr/writings/cathedral-bazaar/cathedral-bazaar/). Linux Kongress. May. Enschede, The Netherlands.

SPAFFORD, Eugene H.; KIM, Gene H. The design and implementation of Tripwire: a file system integrity checker. Technical Report CSD-TR, Purdue University. Nov 1993.

COMPUTER SECURITY INSTITUTE AND FEDERAL BUREAU OF INVESTIGATION. CSI/FBI Computer Crime and Security Survey. Computer Security Institute publication. March 1998.

A NETWORK MANAGEMENT TOOL FOR WINDOWS 2000 NETWORKS

Alessandro Augusto, Jansen Sena, Paulo Lício de Geus
Computer Institute IC-UNICAMP, Campinas, SP, Brasil
{alessandro.augusto, jansen.sena,

ABSTRACT

Network management on Windows-based machines is considered arduous and challenging. The success of the management process requires automated mechanisms for remote Registry auditing and configuration, through secure communication between workstations and servers. This paper shows the necessity and the advantages brought by the implementation of a system management tool, DoIt4Me, which is designed to reduce the complexity of administering large Windows 2000 networks. The paper also advises using DoIt4Me with IPSEC, reducing the risk of eavesdropping and spoofing.

1 - INTRODUCTION AND MOTIVATION

Local and wide area computer networks have changed the landscape of computing forever. Almost gone are the days when each computer was separate and distinct. Today, networks allow people across a room or across the globe to exchange electronic messages, share resources or even use each other's computers. Networks have become such an indispensable part of so many people's lives that one can hardly imagine using modern computers without them [9]. But networks have also brought with them their share of security problems, precisely because of their power to let users easily share information and resources. Networks allow people from anywhere to remotely do anything that is possible to do locally. They have created almost as many risks as they have created opportunities [9]. Ideally, each organization would have a system administration team with the time, staff and information available to plan network growth, management and security. But this scenario usually does not happen. System administrators are often responsible for a large number of tasks that keep them permanently busy, i.e. with no time available to manage the computers adequately or to apply a good level of security to each computer. Most network security strategies have focused on preventing attacks from outside the organization's network. Firewalls, secure routers, and token authentication of dial-up access are examples of management attempts to defend against external threats. But hardening a network's perimeter does nothing to protect against attacks mounted from within. In a list of the top ten worst security mistakes information technology people make, number one is: "Connecting systems to the network before hardening them" [14]. Many system administrators still think that the process of securing a network is just installing the latest operating system patch (Service Pack), and many of them think it is only necessary to install these patches on the servers, which is a big mistake. System administrators must bear in mind that security must be maintained not only on the servers, but also on each workstation, i.e., every computer on the network must be as secure as possible. Though this may seem easy, on Windows environments it is an extremely complex task. Windows environments have a reputation for requiring hands-on, i.e. manual, administration. The administrator's physical presence at each machine is necessary every time configuration is needed. In organizations that have a considerably large Windows network, administrators always have a hard time when they need to manage the whole network, especially to apply security configurations to each machine. These hardships imply high monetary costs to maintain a group of system administrators in service, and normally many hours of work. All the Windows configurations are stored centrally in one database called the Registry. Besides the hardware and software configuration, the Registry stores the security settings [12].
Modifications to the Registry's values directly affect the configuration and status of that computer. The goal of this paper is to demonstrate the need for a good system management tool which automates these tasks. Suppose two situations: in the first, the administrator wants to improve the performance of each network computer; one of the requirements is the need to change some memory-related settings in the Registry of each network computer. In the second situation, the administrator wants to improve network security by configuring each computer. The main goal of this work is to find the answers to: how can the administrator automate the configuration of all these settings without visiting each computer and executing the same job on each machine, without interaction? How can the administrator execute all the tasks with one command line? How can a system administrator effectively audit and maintain compliance with security standards (which often change) on a large Windows 2000 network? Besides automating the tasks, how can this be done without the risk of eavesdropping? The remainder of this paper is organized as follows. Section 2 presents some related work. The developed system management tool, called DoIt4Me, is presented in section 3. Section 4 shows the need to implement a management tool using IPSEC, which provides secure connections between the network computers, offering integrity, authentication, and confidentiality protections. Finally, the paper makes some concluding remarks about using DoIt4Me and IPSEC in section 5.

2 - RELATED WORK

In the last several years, a large number of papers have been published about Windows security [6] [7] [8] [10] [15]; nevertheless, Windows networks lack efficient remote administration and management of large sites. Harlan Carvey presents a framework of administrative scripts that had some goals similar to our project's. For example, one of his scripts, called regkeys.pl, is devised to collect Registry values from a remote machine. However, these scripts have some weaknesses: they are not scalable to a large network, i.e., they need human interaction for each computer. The default scripts do not audit more than one machine at the same time. Another weakness of his framework shows when the system administrator wants to configure the value of some Registry keys: no script in the framework allows the administrator to remotely configure the computers.
In our work, remote configuration is one of the most important goals. VNC is another solution related to this work. VNC stands for Virtual Network Computing. It is, in essence, a remote display system which allows you to view a computing 'desktop' environment not only on the machine where it is running, but from anywhere on the Internet and from a wide variety of machine architectures [16]. VNC allows the administrator to execute remote tasks, but it lacks automation: it is necessary to connect to each machine and execute the tasks one machine at a time. There is no way for the administrator to execute the same task on a set of computers at the same time with VNC. This weakness makes VNC a poor fit for the goals of this work.

3 - SYSTEM MANAGEMENT TOOL (DOIT4ME)

In order to automatically manage a large network, it was necessary to cover the Windows 2000 deficiency in tools for remote automation of administrative tasks, and to scale whatever solution one finds to large numbers of machines. This had to be done with a large amount of configuration flexibility (so it could be tailored to the needs of different machines and administration methods), in a way as automatic as possible. The developed system management tool should have properties such as: simple use and maintenance; being centralized and scalable; being configurable in order to meet specific user needs; being capable of enforcing compliance with security policies and standards; reducing the overall cost of administration; and requiring minimal human interaction. It should also scale to a network of any size. To meet all the requirements above, a new tool was implemented, called DoIt4Me [2] [3] [4] [5]. System administrators can customize the DoIt4Me code at any time, because it is implemented in Perl [1]. Its interface has a simple unified syntax and is used through the Windows command-line interpreter. The current DoIt4Me options include, but are not limited to:

1. Perform remote auditing of a subset of Registry settings. The administrator only needs to specify what Registry settings he or she wants to audit.
2. Remotely configure a subset of Registry settings. The administrator can modify the Registry settings, specifying the new value of each Registry key.
3. Perform service status auditing. The administrator can configure DoIt4Me to audit the status of either all or a set of services. Auditing of specific services is also contemplated, such as "which machines are running the service 'schedule'?"
4. Start or stop remote services. The administrator can start or stop any subset of services. For this purpose, he or she needs only to specify the service name and the action (to start or to stop it), and the subset of computers to apply these configurations on.
5. Reboot or shutdown. There is an option where the administrator can reboot or shut down a subset of workstations. In this option, the administrator can configure the grace period before rebooting, the message to send before rebooting, and the subset of machines to be rebooted.

6. Apply permissions on files, folders and Registry keys (ACLs) (module under construction).
7. Ping a list of computers.

All the above options can be executed for a set of computers at the same time.

Installation, configuration, reporting

To manage a Windows network with DoIt4Me, it is only necessary to install it on the domain controller (DC) and execute it as the domain administrator. There are no DoIt4Me clients running on the workstations. With this system management tool, the administrator can remotely control any subset of machines served by the DC. All the configuration files are stored on the server. The current version of DoIt4Me has fewer than 10 configuration files. Since DoIt4Me needs to be installed only on the PDC, all these files are also located in the "doit4me/cfg" folder of the PDC. Some examples of the configuration files:

- pclist.cfg: contains the subset of machines that DoIt4Me will scan or configure. The syntax of this file is: the name of each computer on each line, followed by ";".
- srvnewstatus.cfg: this file is used to change service status. The syntax of this file is: on each line, the service name followed by its new status, i.e., 1 to start the service or 0 to stop it. Suppose the administrator wants to start the schedule service on computer_a. In this case, he or she would configure pclist.cfg with the name of computer_a and configure srvnewstatus.cfg with "schedule; 1".
- regaudit.cfg: this file is used by DoIt4Me in the auditing option. The administrator must specify the name and the path of each Registry key that he or she wants to audit.
- regconfig.cfg: this file is similar to regaudit.cfg, but in this case it is used to configure the Registry, so the administrator must specify, besides the name and the path of the Registry keys, the new value that will be applied to each key.

The output produced should be in a format fit for human consumption.
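The configuration formats just described are simple line-oriented, semicolon-delimited files. DoIt4Me itself is written in Perl; the following Python sketch is illustrative only, parsing the two formats exactly as the text describes them (the function names are assumptions):

```python
def parse_pclist(text):
    """Parse pclist.cfg-style content: one computer name per line,
    terminated by ';' (illustrative sketch of the format described above)."""
    names = []
    for line in text.splitlines():
        line = line.strip()
        if line.endswith(";"):
            line = line[:-1].strip()    # drop the trailing ';' delimiter
        if line:
            names.append(line)
    return names

def parse_srvnewstatus(text):
    """Parse srvnewstatus.cfg-style content: 'service; 1' means start the
    service, 'service; 0' means stop it."""
    wanted = {}
    for line in text.splitlines():
        if ";" not in line:
            continue                    # skip blank or malformed lines
        service, status = line.split(";", 1)
        wanted[service.strip()] = status.strip() == "1"   # True = start
    return wanted
```

For the example in the text, a pclist.cfg containing `computer_a;` plus a srvnewstatus.cfg containing `schedule; 1` would yield one target machine and one service to start.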
The reports enable the system administrator to identify, quickly and easily, any problems related to the machines, ranging from a client being down to a subset of machines not complying with security policies and standards. Below, Figure 1 shows the DoIt4Me interface, Figure 2 shows a Registry audit report and Figure 3 presents the service audit report.

DoIt4Me Network Management Tool for Windows 2000
Usage: doit4me.pl <option>
Option:
<1> Audit Registry keys
<2> Configure the Registry
<3> Check the status of ALL NT services
<4> Check the status of a subset of NT services
<5> Change NT services status (Start/Stop)
<6> Ping a subset of workstations
<7> Reboot a subset of workstations
<8> Help

Figure 1: DoIt4Me Interface

C:\> DoIt4Me.pl
Auditing Report
COMPUTER   KEY                      VALUE
argentina  CSDVersion               Service Pack 6
brazil     CSDVersion               Service Pack 6
paraguai   CSDVersion               Service Pack 5
argentina  DontDisplayLastUserName  0
brazil     DontDisplayLastUserName  1
paraguai   DontDisplayLastUserName  0

Figure 2: DoIt4Me Registry Audit Report

C:\> DoIt4Me.pl
Services Status: schedule
COMPUTER   STATUS
argentina  [Started]
brazil     [Started]
paraguai   [Stopped]

Figure 3: DoIt4Me Services Audit Report

DoIt4Me uses TCP/IP packets for communication between the server and the workstations. The packets are not encrypted, so one problem became apparent during the implementation: eavesdropping. To guarantee security during the communications, DoIt4Me could be used with IPSEC (IP Security Protocol) [13].

Practical Examples

As a practical example of a security configuration, suppose the system administrator has a Windows 2000 security checklist and wants to audit all the network computers and then configure the computers that are not in compliance with the security policy. To audit, the administrator must configure the pclist.cfg configuration file with the names of all the computers that he or she wants to audit. Besides that, it is necessary to specify the name and the path of all the Registry keys that will be scanned in the regaudit.cfg configuration file. Then, executing DoIt4Me with option 1, the tool will report something similar to the output shown in Figure 2. After auditing, the administrator knows which computers are not in compliance with the configurations that he wants to apply, so the administrator must now specify in the regconfig.cfg configuration file the name, the path and the new value of each Registry key that he or she wants to configure. The pclist.cfg file must also be configured, so DoIt4Me knows which subset of computers will be remotely configured. Note that in this example we used a security checklist, but it could be a task to remotely improve the performance of all the network computers, for example, the need to change some memory-related settings in the Registry of all the computers.

4 - IPSEC

Over the last few decades, computers on the Internet have been subject to many individual attacks. The solution to these attacks was relatively simple: encourage users to choose good passwords, and prevent users from sharing accounts with each other. But this infrastructure has come under attack: network sniffers have captured the packets passing through networks as they are transmitted. IP spoofing attacks have been used by attackers to break into hosts. Data spoofing has been used by attackers on a network to insert data into an ongoing communication between two other hosts, and has been demonstrated as an effective means of compromising the integrity of programs executed over the network. IPv4 is designed to get packets from one computer to another; the protocol makes no promise as to whether or not other computers on the same network will be able to intercept, read or modify those packets in real time. Such interception is called eavesdropping. The only way to protect against eavesdropping in these networks is by using encryption.
The need for IP-based network security is already great and is growing. The challenge for network administrators is to ensure that traffic is: safe from data modification while en route (data integrity); safe from interception, viewing, or copying (confidentiality); and safe from being accessed by unauthenticated parties (authentication). Designed by the Internet Engineering Task Force (IETF) for the Internet Protocol, IPSEC supports network-level authentication, data integrity, and encryption [11]. Because IPSEC is deployed below the transport level, network managers (and software vendors) are spared the trouble and expense of trying to deploy and coordinate security one application at a time. No user training is required. By deploying it in their networks, network managers provide a strong layer of protection for the entire network, with applications automatically inheriting the safeguards [11]. Network administrators and managers benefit from the integration of IPSEC in their networks for a number of reasons, including:

- Transparency: IPSEC exists below the transport layer, making it transparent to applications and users, meaning there is no need to change network applications on a user's desktop.
- Authentication: strong authentication services prevent the interception of data through falsely claimed identities.
- Confidentiality: prevents unauthorized access to sensitive data as it passes between communicating parties.
- Data integrity: IP authentication headers and variations of hashed message authentication codes ensure data integrity during communications.
- Flexibility: the flexibility of IPSEC allows policies to apply enterprise-wide or to a single workstation.

One of the great benefits of IPSEC is the ability to protect against both internal and external attacks. Again, this is done transparently, imposing no effort or additional overhead on individual users.
5 - CONCLUSIONS

Network management of Windows 2000 environments is a challenging task, and it is imperative to automate it as much as possible. It requires a combination of auditing, configuration, security and automation mechanisms. One of the most important tasks of a system administrator is to keep the operating system and installed software up to date with the most current patches. Many of these patches fix security vulnerabilities that are well known to intruders; unfortunately, "Windows systems are not secure by only installing the last Service Pack". Also, in a large network, system administrators must apply security measures not only to the servers, but also to each workstation. There are several tools available to help with network management, but few freely available solutions for fixing the problems on each network machine.

DoIt4Me is a must-have automated management tool for Windows network administration. The current version of DoIt4Me addresses security weaknesses and eases standardization and adherence to Windows network security policies. Our experience has shown that it is possible to remotely manage a large NT and W2K network in a scalable way with DoIt4Me. And at a time when network security is increasingly vital, IPSEC makes it easy for network managers to provide a strong layer of protection to their organization's information resources. By combining DoIt4Me and IPSEC, the system administrator provides network managers with a critically important line of security: DoIt4Me lets the administrator automate hard administrative tasks with a single command line, while the flexibility of IPSEC lets network managers create custom security policies and filters based on user, work group, or other criteria.

6 - REFERENCES

[1] ActiveState WebSite. 24/09/
[2] AUGUSTO, Alessandro. "Applying Security Configurations to a Large Number of Windows NT Computers Without Visiting Each Machine". Proceedings of IEEE LANOMS'2001: the Second Latin American Network Operation and Management Symposium, Belo Horizonte, MG, Brazil, August. (in English)
[3] AUGUSTO, Alessandro; GUIMARAES, Célio; DE GEUS, Paulo Lício. "Administration of Large Windows NT Network with DoIt4Me". Proceedings of SANS 2001: The 10th International Conference on System Administration, Networking and Security, Baltimore, MD, USA, May. (in English)
[4] AUGUSTO, Alessandro; GUIMARAES, Célio; DE GEUS, Paulo Lício. "DoIt4Me". Accepted to the 1st Tools Demonstration. Proceedings of SBRC 2001: the 19th Brazilian Symposium on Computer Networks, Florianópolis, SC, Brazil, May.
[5] AUGUSTO, Alessandro; GUIMARAES, Célio; DE GEUS, Paulo Lício. "DoIt4Me: a tool for automating administrative tasks on Windows NT Networks".
Proceedings of WSEG'2001: Workshop of Computer Security, Florianópolis, SC, Brazil, March. (in English)
[6] CARVEY, Harlan. "System Security Administration for NT". Proceedings of USENIX LISA-NT: The 2nd Large Installation System Administration of Windows NT Conference, USA.
[7] CERT. Windows NT Configuration Guidelines, April. /09/ tech_tips/
[8] DALY, Gregg; BUHRMASTER, Gary; CAMPBELL, Matthew; CHAN, Andrea; COWLES, Robert; DENYS, Ernest; HANCOX, Patrick; JOHNSON, Bill; LEUNG, David; LWIN, Jeff. "NT Security in an Open Academy Environment". Proceedings of USENIX LISA-NT: The 2nd Large Installation System Administration of Windows NT Conference, USA.
[9] GARFINKEL, Simson; SPAFFORD, Gene. "Practical UNIX & Internet Security". O'Reilly & Associates, Inc.
[10] GOMBERG, Michail; STACEY, Craig; SAYRE, Janet. "Scalable, Remote Administration of Windows NT". Proceedings of USENIX LISA-NT: The 2nd Large Installation System Administration of Windows NT Conference, USA.
[11] Internet Engineering Task Force (IETF). Url:
[12] KIRCH, John. "Troubleshooting and Configuring the Windows 95/NT Registry". Macmillan Computer Publishing.
[13] Microsoft Windows 2000 Server. "IP Security for Microsoft Windows 2000 Server". White Paper.
[14] SANS. "Mistakes People Make that Lead to Security Breaches". 24/09/
[15] Trusted Systems Services. "NSA Windows NT Security Guidelines: Considerations & Guidelines for Securely Configuring Windows NT in Multiple Environments", June. /09/
[16] VNC. Virtual Network Computing. 24/09/2001.


SOFTWARE PROTECTION THROUGH DIGITAL CERTIFICATION

João Luiz Francalacci Rocha
Universidade Federal de Santa Catarina, CTC - Departamento de Informática e Estatística, Florianópolis, SC

Ricardo Felipe Custódio, Dr. Sc.
Universidade Federal de Santa Catarina, CTC - Departamento de Informática e Estatística, Florianópolis, SC

ABSTRACT This paper proposes a more effective way of fighting unauthorized copying of computer software. It is a study of public-key infrastructure based on the international X.509v3 recommendation and of its possible applicability to software protection through digital certification. The purpose is to tie software registration to the user's digital certificate. If the user then wants to make a pirate copy and distribute it, he or she would have to supply, along with the copy, his or her certificate and private key, which could bring serious legal complications, because the digital certificate is bound to the user by a contract and this association cannot be denied.

1 INTRODUCTION Technology has broken down borders and created supercomputers, worldwide networks and state-of-the-art software.
On the other hand, problems have emerged that range from invasion of privacy and lack of data security to disrespect for copyright, perhaps one of the hardest problems technology has ever dealt with: it seems to be a virus without a cure, a maze without an exit. Today piracy harms thousands of software producers all over the world, causing losses to companies and manufacturers on the order of billions of dollars, and nothing done so far has been truly effective, not even federal laws. The problem goes beyond computer chips: it passes through the user's conscience, takes off on the ease of propagation offered by the Internet, and lands on the fact that digital media do not degrade. To resolve this impasse, a new and more efficient way of fighting piracy needs to be developed. That is exactly the proposal of this article, which shows how digital certification can help protect software and property rights. To give the reader the best possible understanding, this article is structured as follows: first, software piracy, the control mechanisms currently employed, the causes and trends of piracy, and disrespect for intellectual property are discussed, to make clear that piracy tends to spiral out of control unless significant investments are made in this area. Next, a tour of digital certification technology covers topics such as code signing, digital signatures and public-key infrastructure, to give the reader a minimum of overall background on this technology. Finally, the Software Protection through Digital Certification model is presented, with emphasis on its structure and functionality and with comments on the benefits the model can bring to software producers, without forgetting to discuss the weaknesses that may exist.
2 A GLANCE AT THE PRESENT: WAS EVERYTHING DONE IN VAIN?

Why is piracy so widely practiced? This section presents the reasons that lead people to commit this illegal act, which causes enormous losses to the software industry, along with the most common ways of fighting piracy and the current landscape of illegal copying in Brazil.

2.1 The Blind Culture

Alongside the evolution of computing, software piracy and disrespect for intellectual property have risen at alarming rates. Today it is very common to buy pirated CDs from street vendors, to download cracked programs from the Internet, and even to find advertisements for pirated CDs in newspapers (see Figure 1). The ease and the impunity are so great that we run the risk of creating an irreversible (or at least hard to reverse) negative culture among the millions of computer users. Worse, culture, whether good or bad, is passed from generation to generation. This issue is discussed in the SIIA's¹ Global Software Piracy 2000 report: the ease of duplication and the high quality of pirated software represent a significant problem for the software industry (SIIA, 2000), precisely because there is no degradation in quality when software is copied and recopied. The factors that most contribute to the growth of this counterculture are greed, carelessness, ignorance of current laws, and lack of respect for intellectual property. According to the same report, piracy has grown sharply over the last three years, mainly in countries that lack legal mechanisms to fight this crime. In Latin America alone, losses jumped from US$970 million in 1997 to US$1.128 billion. Compared with losses in the rest of the world, which total US$12.163 billion, Latin America accounts for a little less than 10%. One positive aspect of this picture is that these numbers do not fully reflect reality, since they assume that each pirated copy would otherwise have been a legal purchase, which is not always true.

Figure 1. Advertisement published in the Diário Catarinense, 18 July.

1 SIIA - Software & Information Industry Association.

2.2 Current Attempts to Control Piracy

The software industry is not passive in the face of piracy; on the contrary, many attempts to protect software have already been tried. The problem is that this is a delicate subject for software companies, which hide it or avoid publicizing it to their customers, since copy protection can be a cause of falling sales: the software becomes harder to copy, or may become a future headache for the buyer.

2.2.1 A Generic Electronic Payment Model Supporting Multiple Commercial Transactions

In 1999, Fu-Shen Ho, Yu-Lun Huang and Shiuh-Ping Shieh conceived a generic payment model (Huang, 1999) intended to handle electronic transactions for multiple merchants, in which retailers, resellers, certification authorities and content providers are all involved in the distribution of digital files. The model's strong point is the guaranteed return of copyright revenues: from the moment the electronic content leaves the content provider until the product reaches the consumer, the copyright holder receives his share, as do all the entities involved in the sale (content provider, merchant, CA and financial backer). The model prevents piracy among intermediate merchants, since only the consumer can decipher the electronic content. The authors propose implementing the model on top of two well-known payment protocols: SET² and NetBill³.

2 Secure Electronic Transaction, a well-known payment protocol proposed by VISA and MasterCard.
3 Authored by B. Cox, J. D. Tygar and M. Sirbu, a system for payments for goods sold over the Internet.

2.2.2 Copy Protection for Electronic Publishing on Computer Networks

To fight piracy, the authors of this model propose an architecture with two distinct schemes to enable secure distribution of electronic documents (Choudhury, 1995). The first and most secure strategy requires that peripherals (displays and printers) be sold with specific firmware supporting the cryptography used by the model. The second strategy, more immediate and economically viable, requires that software be installed on the user's computer. This second strategy, however, is vulnerable to reverse-engineering attacks, in which a sophisticated user with resources and plenty of patience could alter the system's authentication calls and defeat the protection scheme.

2.2.3 Other Forms of Software Copy Protection

Copy protection comes in several forms, as Ethan Winer explains in his article "The Audio Industry's Dirty Little Secret":

Serial number ("cd-key"): the simplest form of copy protection requires you to enter a serial number when installing the program. In practice this protects very little, since anyone can lend the installation disks and the serial number to a friend (Winer, 2000). This form is the least punitive for the user.

Protection diskette: another form ties the software to a theoretically uncopiable installation diskette, required by the software at every new installation. This form already shows problems, since the diskette can develop defects and cause annoyance to the user. HandProt⁴ is an example of this kind of protection.

Challenge password: the most common form of protection today forces the user to contact the software producer by telephone or e-mail to obtain a counter-password and enable all of the program's functions.

Hardware device ("dongle" or "hardware key"): the worst form for the legitimate user. The most punitive method of software protection uses a hardware device that ships with the product and must be plugged into the computer's USB or parallel port; if the device is not connected, the program detects this and does not run (Winer, 2000). Microcosm (www.microcosm.co.uk) and Griffin Technologies (www.griftech.com) are examples of companies that use this technology.

None of these forms of protection is fully effective. It is common to find cracked versions of programs on the Internet or on pirated CDs: crackers use decompilers and reverse engineering to strip out the protections embedded in a program and release the cracked versions.
2.3 What Encourages So Much Piracy

We should give credit to Ethan Winer's words, in the same article, for an explanation of the strong incentive to piracy: "Personally, I believe the real problem is that software is outrageously expensive. People want to do the right thing, and they will be willing to pay for a program that meets their needs if they can afford the price" (Winer, 2000). But, of course, that is not all. As mentioned before, ignorance, neglect, easy access to the pirated product, and other factors also contribute. We could classify these factors as: the high cost of software; the lack of copyright laws and policies and the consequent impunity of offenders; the lack of an educational policy; the ease of access to, and non-degradation of, pirated products (the Internet, CDs, 3½-inch diskettes, zip drives, etc.); and the absence of an effective protection to stop this process.

2.4 Piracy in Brazil

Within Latin America, Brazil occupies a prominent place in the field of piracy: together with Argentina and Mexico, we account for two thirds of the losses caused by piracy in Latin America. In Brazil the picture was extremely serious. After the Software Law, law number 7.646, was sanctioned on December 18, 1987, Brazil joined the countries with specific legislation protecting the software industry. With the introduction of this law, it was established that violating the copyright of computer programs is subject to civil action for damages. Starting in 1989, ABES (Associação Brasileira de Empresas de Software) launched an anti-piracy campaign in Brazil, and three years later the Business Software Alliance, a United States organization that brings together the main software producers worldwide, joined forces with ABES to fight piracy, promoting search-and-seizure actions throughout the country.
However, greater public awareness was still lacking, and the law needed to be improved, made more complete and efficient, imposing penalties both on software manufacturers and on consumers. The National Congress then decreed the new Software Law, law number 9.609, of February 19, 1998 (Lei de Software, 1998), which treats the protection of copyright and the guarantees to computer program users more rigorously, raising the detention penalty from one to four years. Despite all these efforts, Brazil still has a high piracy rate, as does the rest of the world.

4 Product sold by Squadra:

3 CODE SIGNING

The goal of this section is to provide a better understanding of digital signatures: how they are produced, and how the integrity and authenticity of a piece of code can be guaranteed.

3.1 Benefits of Code Signing

Code signing gives the user the assurance of "sealed" software, meaning that it ensures authenticity and integrity. By authenticity, the user is guaranteed that the code really is signed by whoever claims to have signed it; by integrity, that the code has not been altered after being signed. Furthermore, if the software performs malicious or harmful activities on the computer where it was installed, the user has legal recourse against the signer of the code.

3.2 Types of Code Signing

Several code-signing tools and schemes exist today, for example Netscape Object Signing, developed for users of Netscape's browser, and the well-known personal computer program PGP⁵ (Pretty Good Privacy). The latter has three drawbacks for code signing: i) because PGP is not built into Internet browsers, the signature cannot be validated before the download completes; ii) the signature digest is not attached to the code; and iii) PGP does not use a public-key infrastructure. The method that has gained the widest acceptance and success, however, is the one developed by Microsoft: Authenticode.

3.3 Digital Signatures

The digital signature arose to handle situations in which there is no full trust between the sender and the receiver of a piece of information; something more than encryption is then needed to guarantee the authenticity of the data. Digital signatures are created using a public key and a private key; together, these keys form a pair belonging to a single owner.
The public key is made available to the community, while the private key remains exclusively with its owner. If one of the keys is used to encrypt, the other is needed to decrypt. In the case of the digital signature, the private key is used to generate the signature, while the public key is used for verification. Authenticode uses this digital-signature technology to guarantee the origin and integrity of the software: origin through a public/private-key digital signature, and integrity through the generation of a message digest. Signing the code involves computing the digest (see the next section for details), which is encrypted and attached to the code; if anything in the code changes, even a single bit, integrity is compromised. The digest generated by Authenticode is 128 or 160 bits long, depending on the algorithm used in its computation.

Figure 2. Verification process for signed code.

Figure 2 illustrates the validation process for signed code. This process is transparent to the user who, for example, downloads signed code from the Internet. The browser itself takes charge of validating the code, starting by verifying the authenticity of the developer's digital ID using the public key of the CA that certifies it, since the code signer's digital ID is signed with the CA's private key. To decrypt the digest, the browser uses the public key contained in the signer's digital ID. It then uses the same algorithm to generate a fresh digest and compares it with the decrypted one; if the two digests are identical, validation has succeeded.

5 PGP is a widely used program that provides, among other things, authentication and confidentiality services. More information about PGP is available at:
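The digest-comparison step at the heart of this validation can be sketched briefly. Signature decryption is abstracted away here: `signed_digest` stands for the value recovered with the signer's public key, and the code bytes are invented for the example.

```python
import hashlib

# Sketch of the final step of signed-code validation: the digest
# recovered from the signature must equal a freshly computed digest
# of the code being validated.

code = b"\x4d\x5a...signed executable bytes..."       # illustrative content
signed_digest = hashlib.sha1(code).hexdigest()        # what the signer embedded

def verify(code_bytes, recovered_digest):
    """Recompute the digest with the same algorithm and compare."""
    return hashlib.sha1(code_bytes).hexdigest() == recovered_digest

print(verify(code, signed_digest))            # True: code unmodified
print(verify(code + b"\x00", signed_digest))  # False: a single byte was added
```

SHA-1 here yields the 160-bit digest mentioned in the text; substituting `hashlib.md5` would give the 128-bit case.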
3.4 Hash Function

This is a function that maps the content of the object being signed to a fixed-size value used for authentication. The digest value is generated by a function r of the form:

r = H(Ob)

where Ob represents an object of variable size and H(Ob), or r, is the fixed-size digest value. All hash functions work on an input (a file, a message, etc.) viewed as a sequence of n-bit blocks, processed one by one in an iterative fashion to produce the n-bit digest. Two algorithms are currently the most used: MD5, which has proven vulnerabilities and produces a 128-bit output digest, and SHA-1, which produces a 160-bit output digest and is considered more trustworthy.

4 PUBLIC-KEY INFRASTRUCTURE (PKI)

A public-key infrastructure (PKI) is the set of services needed when public-key technology is used on a large scale. Operational protocols are responsible for delivering certificates and certificate revocation lists (CRLs) to the systems (applications) that need to validate signatures. Management protocols are responsible for the interactions between the different PKI components, whether or not they are connected to the Internet; they also provide the means for registration, initialization, certification, revocation and recovery of key pairs. A large-scale public-key distribution system must handle relationships among multiple CAs; the structure of these relationships varies with the user community, the nature of the applications that work with certificates, and the geographic area covered. This section covers the Certification Authority, the Registration Authority, the public interface, digital certificates and the X.509v3 recommendation, in order to show the reader the components behind a public-key infrastructure.

4.1 CA - Certification Authority

Certification Authorities issue digital certificates to entities that need to identify themselves and guarantee their operations in the electronic world. Each digital ID issued is certified and guaranteed by the certification authority responsible for its issuance. In practical terms, we can establish a general model of a CA (illustrated in Figure 3) that counts on the support of other entities, such as the Registration Authority and the public interface.

4.2 RA - Registration Authority

The Registration Authority is responsible for verifying the information supplied by the entity requesting the certificate (see Figure 3).
It acts as a support body for the CA and, in some cases, may require the requester to appear in person at the RA's office to guarantee the veracity of the information, or may even outsource this kind of service, hiring companies to go to the requester.

Figure 3. General model of a Certification Authority.

4.3 Public Interface

The public interface plays the role of interacting with the world outside the CA's general model (as illustrated in Figure 3). It provides the requesting entity with a way to request digital certificates, consult revocation lists, and so on. To request a certificate, the requesting entity can download a certificate-issuance program to its local machine, which will issue the certification request in conformance with the PKCS #10 and PKCS #7 standards (see section 4.6), creating a key pair (public and private).

4.4 Digital Certificates

A digital certificate, or digital ID, is a form of electronic credential. It is issued by a third-party entity, called a CA, which establishes an identity for the requester. The technology used in the digital ID is entirely based on public/private key-pair technology, with the public key stored in the digital ID. After issuing the certificate for an entity, the CA digitally signs it, embedding the CA's signature: the CA computes the digital signature by producing a digest of the certificate and encrypting it with the CA's private key. For this reason digital IDs are forgery-proof: a forger does not know the certification authority's private key and cannot generate the correct signature for the certificate. Even if only one field of the certificate were altered, authentication would fail, since the certificate would produce a digest different from the one in the signature.
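The tamper-evidence argument in section 4.4 can be shown with a toy sketch. Only the digest side is modeled here; in a real CA the digest is then encrypted with the CA's private key, and the field names and values below are invented for the example.

```python
import hashlib, json

# Toy illustration of why altering any certificate field invalidates
# the CA's signature: the altered certificate hashes to a different
# digest than the one the CA signed.

cert = {
    "subject": "CN=Alice",
    "issuer": "CN=Toy CA",
    "not_after": "2002-12-31",
    "public_key": "...",
}

def cert_digest(c):
    # Canonical serialization (sorted keys) so the digest is reproducible.
    return hashlib.sha1(json.dumps(c, sort_keys=True).encode()).hexdigest()

signed = cert_digest(cert)  # the value the CA would encrypt with its private key

# A forger extends the validity period by one field...
forged = dict(cert, not_after="2099-12-31")

print(cert_digest(forged) == signed)  # False: the tampering is detectable
```

Without the CA's private key, the forger cannot produce a new encrypted digest matching the altered certificate, which is the property the text describes.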
4.5 The X.509 Certificate (version 3)

X.509 evolved to version 3 as of June 1997, with the conclusion of the final report of the ITU-T recommendation (ITU-T, 1997). With versions 1 and 2, the certificate formats had proved deficient in several respects and needed to carry additional information for the standard to become more secure and effective. The fundamental change was to make the format of the certificate and of the CRL extensible, so that they can carry information such as supplied key and policy data, attributes of the certificate's owner and issuer, certification-path constraints, and so on. As an example, the extensions can include a list of the policies followed in creating the certificate, so as to ensure that a certificate created for exchanging e-mail messages is not used in financial transactions (Figure 4 illustrates the format of the X.509 certificate and its extensions).

Figure 4. Structure of the X.509 version 3 certificate.

4.6 PKCS - Public-Key Cryptography Standards

The first publication of PKCS came out in 1991, and today the standard is widely adopted. It was created by RSA Laboratories with the goal of promoting the development of secure applications and of other standards based on public-key cryptography.

4.6.1 PKCS #7 v1.6 - Cryptographic Message Syntax Standard

The Cryptographic Message Syntax Standard arose to define several ways of encrypting a message, with or without a digital signature. Its use is not limited to electronic mail; it is also used in electronic transactions (SET - Secure Electronic Transaction), such as bank card payments, in the W3C⁶ digital signature initiative, and in another standard, PKCS #12 - Personal Information Exchange Syntax Standard.

4.6.2 PKCS #10 v1.7 - Certification Request Syntax Standard

Initially conceived to support message encryption under the PKCS #7 standard, PKCS #10 describes the syntax of certification requests. A certification request consists of a distinguished name, a public key and an (optional) set of attributes. This request, which must be duly signed by the requester, is sent to a CA, which will transform it into a certificate in the X.509 standard.

5 AN APPROACH TO THE MODEL
Software Protection through Digital Certification has a distinguishing feature: the link between three entities - the user, the digital certificate and the software. To guarantee that this link is strong enough to inhibit piracy, substantial technology is employed in this protection scheme: code signing, hash functions, cryptography, public-key infrastructure and digital certification (see section 4) are the elements that reinforce it. In this section the proposed model is presented, highlighting its functionality and the benefits a software producer can obtain from it; the possible weaknesses that may exist are also discussed.

5.1 The Software Licensing Process

In practice, requesting the certificates would fall to the software producer or to the reseller, who would pass the data on to a trusted CA. The producer and the reseller would then act as an RA (Registration Authority), with the responsibility of verifying the veracity of the supplied data. This cycle is illustrated in Figure 5. In a hypothetical situation, we could imagine a potential customer downloading a given piece of software directly from the producer's page and deciding to purchase a usage license; or we could imagine that the customer already knows the software and acquired it directly from a reseller. To illustrate this situation, we can follow the flow presented in Figure 5.

Figure 5. Certificate requests through the producer/reseller.

6 World Wide Web Consortium. W3C Digital Signature Initiative. Available at:

Arrows 1: using a specific form, the customer requests the digital license certificate, substantiating the supplied data. This can be done either at the reseller where the product was purchased or directly on the producer's page from which the software was downloaded; in both cases, reseller and producer act as a kind of public interface (see section 4.3).

Arrows 2: producer and reseller verify the supplied data and submit it to a CA they trust, in the form of a certification request (in conformance with the PKCS #7 and PKCS #10 standards; see section 4.6).

Arrow 2.1: when the request comes through a reseller, the reseller can also report the sale to the producer, so that the producer updates its database with one more registered customer.

Arrows 3: the certificate is created and then made available by the CA in a public directory, from which the customer can download it.

Arrows 4: the certificate can also be delivered by the reseller, if that is more convenient for the customer.

5.2 A Theoretical Model of Software Protection through Digital Certification

As we have seen, after trying out the software the user can acquire the digital certificate to enable all of the software's functions. He can do this through a public interface offered by the producer or at the reseller from which he obtained the demonstration copy. At the moment the digital certificate is requested, the user creates a key pair (public and private; see section 4). The public key will appear in the requested digital certificate, while the private key must remain protected by the operating system of the computer on which the user intends to use the software. Upon receiving the certificate, or downloading it from a public interface, the user can import it into the operating system, thereby becoming able to validate the protected software.
In this model, the protected software calls the operating system asking for validation of the certificate. The certificate manager is then invoked and takes charge of the validation. This procedure can be a query to a public directory to check that the certificate has not been revoked (arrow 3c of Figure 6) or, in the case of a machine not connected to the Internet, a simple check of the certificate's expiration date. In either case, if validation fails, the software will not start.

Figure 6. Software protection scheme through certification.

The advantages that Software Protection through Digital Certification can offer the software producer are considerable: besides providing strong copy protection based on certification and digital-signature technology, it gives greater control over the usage licenses sold. With the help of Figure 6, we can see this advantage by imagining the following situation. After a certain number of licenses have been sold, the producer decides to commission a survey, possibly conducted jointly with or contracted from auditing bodies, to find out whether the number of licenses sold approximates the number of licenses installed in the market. Suppose the survey reveals that the number of installed licenses is much larger than the number sold, and that a very large number of the licenses spread through the market belong specifically to two customers. The producer can conclude that these two customers broke the software's contract terms⁷ and triggered a wave of illegal copies in the market, distributing along with those copies their certificates and respective private keys.
Faced with this situation, the producer can, besides taking the applicable legal measures (based on the non-repudiation of the digital signature - Stallings, 1999), proceed as follows (as illustrated in figure 6): Arrow 1: The producer asks the CA to revoke the certificates belonging to the two customers in question. Arrow 2: The CA publishes the respective certificates in the certificate revocation list (CRL). Arrow 3a: Every time the protected software is started, validation takes place. It is performed by the certificate manager, which checks whether the certificate has expired, or... Arrow 3b: whether the certificate is included in the local Windows CRL, or...

(7) The contract terms are the producer's responsibility, must comply with the law, and must make clear to the user that the licence certificate is granted on condition that all the terms are accepted.

Arrow 3c: whether the certificate is included in the CRL published by the CA. For more detail on the process behind arrows 3a, 3b and 3c, see item 5.3. Arrow 4: The producer may still give the customers who broke the contract another chance, supplying them with new certificates. Note that from the moment the certificate is included in the CRL, every machine with an illegal copy that is connected to the Internet will stop working, since the list will be consulted (with the user's consent, see item 5.3.3). The remaining machines with illegal copies that happen to have no Internet connection will see their certificates expire naturally on the certificate's validity date.

5.3 Validation Process of the Software Use Licence

For clarity, the Validation Process of the Software Use Licence is presented in three distinct parts. The first part covers everything from Windows authentication to the signing of the nonce, a random number generated from a challenge posed to the user, with the purpose of being digitally signed; the second part handles the verification of that signature; and the third and last part is responsible for checking the validity of the certificate and for consulting the CRLs (local and remote).

Validation Process of the Software Use Licence, Part 1: This stage begins when the user logs on to his Windows session, preparing the release of the user's private key, which was stored encrypted in a repository. Later, when the user starts the protected software, it searches the user's certificate collection (8) for a certificate whose extension indicates its use for running the protected software, and also checks that this certificate has a corresponding private key. With the licence certificate and the private key available, the software then creates what amounts to a challenge for the user: the protected software generates a nonce (an arbitrary, non-repeating random number) and signs this nonce with the private key corresponding to the key pair of the licence certificate. If, when this certificate was installed under Windows, the user chose to set a password authorizing use of the private key, then this password must be supplied at signing time so that the private key can be used to sign the nonce. The signature is produced by a hash function (see item 3.4) that generates a digest, which is then encrypted with the private key.

Figure 7. Validation process of the software use licence, part 1

Figure 7 illustrates the events of this stage. Note that the password prompt for use of the private key is the only event requiring interaction with the user; the rest of the process goes unnoticed.

Validation Process of the Software Use Licence, Part 2: The nonce signed in the previous phase is now verified with the public key of the licence certificate, which is contained in the certificate itself. This operation yields a decrypted digest, which must be compared with the digest produced by recomputing the hash as in the previous stage. If the digests are exactly equal, the user possesses the private key corresponding to the key pair of the licence certificate and is authorized to continue with the software validation process. If the comparison of the two digests shows the slightest difference, the validation process ends immediately, as does the execution of the protected software.

(8) A user may have more than one certificate, for various purposes, generally stored in the "my store" folder of the user's Windows profile, thus forming a certificate collection.

Figure 8. Validation process of the software use licence, part 2
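Parts 1 and 2 of this challenge-response can be sketched with the standard java.security API. This is only a sketch under stated assumptions: the freshly generated key pair below stands in for the licence certificate's keys, which in the real scheme would come from the user's certificate store and the X.509 certificate itself.

```java
import java.security.*;

public class LicenseChallenge {
    public static void main(String[] args) throws Exception {
        // Stand-in for the licence certificate's key pair (in the real
        // scheme the public key lives in the certificate and the private
        // key is guarded by the operating system).
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();

        // Part 1: the protected software issues a random, non-repeating
        // nonce and signs it with the private key (hash, then encrypt).
        byte[] nonce = new byte[32];
        new SecureRandom().nextBytes(nonce);
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(nonce);
        byte[] signature = signer.sign();

        // Part 2: the signature is verified with the certificate's public
        // key; a match proves possession of the matching private key.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(nonce);
        System.out.println(verifier.verify(signature)); // true
    }
}
```

A tampered nonce or a signature made with a different private key would make `verify` return false, which in the model aborts execution of the protected software.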

Figure 8 illustrates this signature-verification phase, which remains transparent to the user.

Validation Process of the Software Use Licence, Part 3: Reaching this stage means that the user: 1) possesses the private key of the key pair of the software use licence certificate, and 2) possibly knows the password for using that private key. The next step is to find out whether this certificate is still valid. Accordingly, the first thing the protected software must do is check the expiry date of the licence certificate. If the date has passed, the whole process stops here. Otherwise, the next step is to consult the local Windows CRL to see whether that CRL itself has expired. If the CRL is still valid, it must be checked whether the licence certificate appears on the list. If the certificate has been revoked, the validation process must stop. If, on the other hand, the revocation of the licence certificate does not appear in the local Windows CRL, or if the local CRL has expired, the next procedure is performed: a query to a remote CRL maintained by a CA. If that query shows that the licence certificate has been revoked, the whole validation process is interrupted. If nothing is found, only then is the protected software definitively released for use. The illustration of figure 9 shows the final validation procedures for the protected software: checking the expiry dates of the licence certificate and of the local Windows CRL, and the local and remote revocation queries for the licence certificate. Note that, from the moment the protected software is installed, the remote CRL query can be left as an option configurable by the user, thus preserving the right to privacy. Figure 9.
Validation process of the software use licence, part 3

Although the Validation Process of the Software Use Licence is somewhat long and apparently complicated, it is worth remembering that it is practically transparent to the user, and quite fast up to the moment of the remote CRL query, since the cryptography performed here operates on a small generated digest. As for the remote CRL query, its speed depends on the network.

5.4 Weaknesses and Limitations of the Model

If, on the one hand, the model brings the software producer advantages in controlling the licences sold, on the other hand, to make this whole software-licensing process viable the producer would naturally need to create resources within its administrative model, for example: creating new departments, hiring staff, training, and seeking partnerships with resellers and Certification Authorities. The resellers and CAs would also have to adapt to this new model. Of course, each company could devise and implement its own software-licensing process, but it would basically revolve around what was illustrated in figure 5, p. 6. To make up for some shortcomings, models such as A Generic Electronic Payment Model Supporting Multiple Merchant Transactions (Huang; see also item 2.2.1, p. 2) could be combined with this form of software protection, strengthening the commercial success of the products sold and minimizing the administrative work required. How best to implement this process, and the impact of defining a management model for the companies that adopt this form of software protection, are matters that will not be discussed in this article. There is also, of course, the problem of computers that are not connected to the Internet and therefore cannot query the certificate revocation lists.
The user may also try to circumvent the validation of the certificate's validity period by changing the computer's date. Although changing the computer's date is simple to do, it can become tiresome over time, besides interfering with other applications that depend directly on the date, such as electronic schedulers and other services. The model is not 100% fail-proof. A sufficiently sophisticated user, with considerable resources, time, patience and a command of reverse engineering, could break the protection scheme and strip it out, generating a new executable. As a countermeasure to this threat, code signing could be used to sign the executable, allowing it to run only while it remains intact. It would be far more laborious for the sophisticated user to break the cryptography protecting the generated digest.
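The expiry and revocation checks that such tampering targets (section 5.3, part 3) can be sketched as a pure decision function. Every name and parameter here is a hypothetical stand-in for data that would really be read from the X.509 certificate and the local and remote CRLs:

```java
import java.math.BigInteger;
import java.util.Date;
import java.util.Set;

public class LicenseValidator {
    // Part 3 decision chain: certificate expiry first, then the local CRL
    // (only if it is still fresh), then the CA's remote CRL.
    static boolean licenseUsable(Date now, Date certNotAfter,
                                 Date localCrlNextUpdate,
                                 Set<BigInteger> localCrl,
                                 Set<BigInteger> remoteCrl,
                                 BigInteger serial) {
        if (now.after(certNotAfter)) return false;       // certificate expired
        boolean localCrlFresh = now.before(localCrlNextUpdate);
        if (localCrlFresh && localCrl.contains(serial)) return false; // revoked locally
        // Local CRL expired or silent: fall through to the remote CRL.
        if (remoteCrl.contains(serial)) return false;    // revoked at the CA
        return true;                                     // software released for use
    }

    public static void main(String[] args) {
        Date now = new Date(2_000_000L), later = new Date(3_000_000L);
        BigInteger serial = BigInteger.valueOf(42);
        Set<BigInteger> empty = Set.of(), revoked = Set.of(serial);
        System.out.println(licenseUsable(now, later, later, empty, empty, serial));
        System.out.println(licenseUsable(now, later, later, empty, revoked, serial));
    }
}
```

The first call prints true (nothing expired, nothing revoked); the second prints false, modelling the moment the producer's revocation reaches the CA's published CRL.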

6 FINAL REMARKS

Fighting piracy has always been arduous and ineffective work for software developers. Despite the efforts made to protect software, no efficient way of doing so has yet been found. According to the 1999 SIIA report (SIIA, 2000), piracy still costs software companies and developers around the world on the order of billions of dollars per year. Digital certification is today a reality that relies on the strength of cryptography to ensure the inviolability of the certificate. There is, moreover, a well-developed set of standards for certificate issuance, hierarchies of CAs, and certification-path validation that supports digital certification, assuring this new market niche a sure future of growth and global acceptance. Legal backing has kept pace with this evolution and is already present in the legislation of many countries. This gives the digital certificate great importance as an instrument of proof in commercial operations over the Internet. Coupling the digital certificate to software sales is a bet on a protection scheme with a good chance of working. The purchasing user would certainly not want to take on the penalties that would follow from releasing the secret key of his certificate in order to pirate the software. This burden of responsibility, besides appealing to the user's conscience, is clear evidence of the provenance of the software licence.

REMARKS

The authors reserve the right to change the information contained in this document without prior notice. No part of this document may be reproduced or transmitted in any form or by any means, electronic, manual or mechanical, for any purpose, without the express written permission of the authors. The trademarks referenced in this document belong to their respective owners. The names and figures mentioned and shown here are for illustration only.

REFERENCES

ITU-T - International Telecommunication Union,
Recommendation X.509 (1997) | ISO/IEC :1993, Information Technology - Open Systems Interconnection - The Directory: Authentication Framework.
CHOUDHURY, A. K., MAXEMCHUK, N. F., PAUL, S. and SCHULZRINNE, H. G., Copyright Protection for Electronic Publishing over Computer Networks. IEEE Network Magazine.
HUANG, Y. L., SHIEH, S. P. and HO, F. S., A Generic Electronic Payment Model Supporting Multiple Merchant Transactions. Computers and Security, 1999.
LEI DE SOFTWARE, Lei n. 9.606, de 19 de fevereiro de. Congresso Nacional, Brasil.
SIIA - Software & Information Industry Association, SIIA's Report on Global Software Piracy. USA, 2000.
STALLINGS, William, Cryptography and Network Security: Principles and Practice. Prentice-Hall, New Jersey, USA, 1999.
WINER, Ethan, Copy Protection - The Audio Industry's Dirty Little Secret. PROREC, USA. Available at: <http://www.prorec.com>, accessed 18/07/2000.

ADDITIONAL BIBLIOGRAPHY

FEGHHI, Jatal, FEGHHI, Jalil and WILLIAMS, Peter, Digital Certificates - Applied Internet Security. Addison Wesley Longman, Massachusetts, USA.
GARFINKEL, Simson L. and SPAFFORD, Gene, Comércio e Segurança na WEB: Riscos, Tecnologias e Estratégias. Market Books do Brasil, São Paulo.
KALISKI, Burt, PKCS #10 v1.5: Certification Request Syntax. RSA Laboratories East, MA, USA.
KALISKI, Burt and KINGDON, Kevin W., Extensions and Revisions to PKCS #7 v1.6. RSA Laboratories East, MA, USA.
MICROSOFT Corporation, Ensuring Accountability and Authenticity for Software Components on the Internet. Microsoft Authenticode Technology, Redmond, WA, USA.
ROCHA, João L. F., Proteção de Software por Certificação Digital. Trabalho Individual, UFSC, SC, Brasil. Available at: <http://www.inf.ufsc.br/~jrocha>, accessed 20/09/2001.

Designing Reliable, Robust and Reusable Components with Java Exceptions

Gisele R. M. Ferreira, Lucas C. Ferreira
Institute of Computing, University of Campinas (Unicamp), Rua Albert Einstein 1251, C.P., CEP, Campinas - Brazil

ABSTRACT

Exception handling is a structuring technique that facilitates the design of fault-tolerant components by providing a suitable scheme to detect and handle errors. Although the rising importance of exception handling is evident, we have noted that programmers are usually not sufficiently able to define and handle exceptions effectively. Indeed, little information is available to help designers and programmers use exceptions appropriately. This work presents guidelines and tips on when and how to use exceptions and gives several examples of good exception usage. We also present the Ariane 5 catastrophe as an example of the problems of representing contractual obligations between components and of imprecisely specifying reusable components. We would like to help programmers and designers avoid potential errors and perhaps achieve truly robust exception handling.

1 INTRODUCTION

Modern society is so dependent on computer services that it is hard to imagine it without them. Different applications have different kinds of dependability requirements, such as reliability and high availability. Reliability is a component's ability to perform according to its specification. Availability is the percentage of time during which the system is delivering its service. To achieve those requirements, modern computer systems must be able to react to inputs not included in their specification, such as failures. Therefore, computer systems must be fault tolerant. Exception handling is a structuring technique that facilitates the design of fault-tolerant components by providing a suitable scheme to detect and handle errors.
Many object-oriented languages, such as C++, Java, Ada, Eiffel and Smalltalk, have exception handling mechanisms (exception mechanisms, for short) among their features. Each language's exception mechanism has different characteristics but, essentially, they all represent errors as exceptions, define handlers to deal with them, and apply an exception handling strategy when one is detected. Although the rising importance of exception handling is evident, we have noted that programmers are not sufficiently prepared to define and deal with exceptions effectively. Indeed, little information is available to help designers and programmers use exceptions appropriately. They make a wide range of errors, from how and when to use exceptions to who should handle an exception occurrence. In [Martin and Murphy, 2000], the authors mention that the lack of information about how to design and implement with exceptions leads to complex, spaghetti-like exception structures. We can add that this lack of information also contributes to the construction of less reliable and robust components, as we will demonstrate in this paper. In Java, until a handler is found, the exception is automatically propagated up the call chain. This automatic exception propagation adds further complexity to the exception design. Much of the time, tracing the exception path becomes an intractable task for developers and, consequently, the handlers designed are not able to handle the exception effectively. Exceptions have to be handled with care, since the program state may be inconsistent; continuing normal execution in this situation can lead to additional exception occurrences and ultimately to a program failure [Cristian, 1989]. Beyond this problem, the Java compiler permits raising one kind of exception, the descendants of the RuntimeException class, without requiring a handler for it.
With this possibility, programmers tend to raise exceptions without worrying about how they should be handled. This paper aims to give guidelines and tips on when and how to use exceptions and gives several

examples to illustrate them. We also present the Ariane 5 catastrophe as an example of the problems of representing contractual obligations between components and of imprecisely specifying reusable components. We would like to help programmers and designers avoid potential errors and perhaps achieve truly robust exception handling. This document is organized as follows: in Section 2 we present what made the Ariane 5 launcher explode. In Section 3 we present the Java exception handling mechanism, including the two types of Java exceptions, named checked and unchecked. In Sections 4 and 5 we suppose that the Ariane 5 software was implemented in Java using unchecked and checked exceptions, respectively, and we explore what could happen in each case. In Section 6 we give guidelines on when to use checked or unchecked exceptions. If unchecked exceptions are chosen, developers must take care with some points that we mention in Section 7. In Section 8 we present two alternatives for reusing non-dependable components. Finally, Section 9 concludes this paper.

2 THE LESSONS OF ARIANE

In 1996, the world witnessed 500 million dollars blowing up. A software error caused the European Ariane 5 launcher to explode about 40 seconds after takeoff. An exception was raised during a conversion from a 64-bit floating-point value to a 16-bit signed integer. There was no explicit exception handler to catch that exception, so it was caught by a generic handler used for uncaught exceptions. Since a generic handler is not able to handle the exception appropriately, the entire software crashed. The cause of this catastrophe was the wholesale reuse of 10-year-old software designed for the Ariane 4. The analysis for the Ariane 4 trajectory had concluded that this overflow could not occur but, unfortunately, the same was not true for the Ariane 5. This constraint was stated in an obscure part of a mission document but appeared nowhere in the code itself.
Because of that, this trivial error remained, the system crashed, and with it the Ariane mission. This episode highlights that both abnormal and normal system behavior must be explicitly documented and represented throughout the software lifecycle [de Lemos and Romanovsky, 2000; Avizienis, 1997; Martin and Murphy, 2000; Ferreira et al., 2001]. Since exceptions are expected to occur rarely, the exception code of a system is in general the least documented, tested, and understood part of the computer system [Cristian, 1989]. To safely reuse a module, it must be equipped with a clear specification of its operating conditions. This is part of the principle of Design by Contract [Meyer, 1997a], in which the interfaces between the modules of a software system - especially a mission-critical one - should be governed by precise specifications. The contract must cover mutual obligations (preconditions) and benefits (postconditions). Specifying the contract only in documentation is therefore not sufficient. This is clear in the Ariane episode, where the constraints were present in a mission document but were never checked. The programming language must support some mechanism to put the specification into the software itself [Meyer, 1997b]. The Ariane system crash would probably have been avoided if the programming language had supported some mechanism that could automatically verify contract violations during testing. In Java, there is no such mechanism to express the obligations and benefits of a contract, so the programmer must be more careful and explicitly define the contract through condition tests that may raise exceptions if something goes wrong. Java has two different types of exceptions, named checked and unchecked.
Even though we know that in Java the error present in the Ariane software would have been detected by the compiler, we will discuss what could have happened if the Ariane software had been implemented in Java, and we will also discuss the results when either checked or unchecked exceptions are used to signal the contract violation.

3 JAVA EXCEPTIONS AND EXCEPTION HANDLING

An exception occurrence is synonymous with the impossibility of delivering the service specified by the component. If a component detects that it cannot provide the requested service, it raises an exception. When the exception is detected, an exception handling mechanism is responsible for finding the appropriate handler to deal with it. First, a handler is searched for within the component that raised the exception. If the component does not have a handler, or the handler is not able to recover the system effectively, the exception is signaled to the caller. Sometimes it is appropriate to catch exceptions within the component; in other cases, however, it is better to let a method further up the call stack handle the exception, since this can give the component more flexibility. In Java terminology, creating an exception object and signaling it is called throwing an exception. An exception is thrown with the Java throw statement. Every exception should have an associated handler. The first step in constructing an exception handler is to enclose the statements that might throw an exception within a try block. The try statement defines the scope of its associated exception handlers. If an exception occurs within the block, that exception is handled by the appropriate handler associated with this try

statement. Exception handlers are associated with a try statement by providing one or more catch blocks directly after the try block. See the example in Figure 1.

try {
    throw y;
    try {
        throw e;
    } catch (Exception e) { }
    throw z;
} catch (Exception y) {
} catch (Exception z) { }

Figure 1 Throwing and catching exceptions in Java

Only objects that derive from the Throwable class or from its descendants can be thrown. The diagram below illustrates the class hierarchy of the Throwable class and its most significant subclasses.

Object
 └─ Throwable
     ├─ Error
     └─ Exception
          └─ RuntimeException

Figure 2 Java throwable hierarchy

The Throwable class has two direct descendants: Error and Exception. Error is intended for dynamic linking failures or other "hard" failures in the virtual machine, which are reported by the virtual machine itself. Typical Java programs cannot catch an Error; in addition, it is unlikely that typical Java programs will ever throw one. On the other hand, typical Java programs should throw and catch objects that derive from the Exception class. An Exception indicates that a problem occurred, but that the problem is not a serious systemic one. One Exception subclass has special meaning in the Java language: RuntimeException. The RuntimeException class represents exceptions that occur within the Java virtual machine (at runtime). An example of a runtime exception is NullPointerException, which occurs when a method tries to access a member of an object through a null reference. The Java packages define several RuntimeException classes; these can be caught just like other exceptions, but catching them is not required. In addition, RuntimeException subclasses can be created in typical Java programs, although this is not recommended. Section 6 will discuss when and how to use runtime exceptions.
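The handler search just described (first within the raising method, then up the call chain) can be seen in a small runnable sketch; the class and method names are ours, not from the paper:

```java
public class PropagationDemo {
    static void inner() {
        // No handler here: the exception propagates to the caller.
        throw new IllegalStateException("raised in inner()");
    }

    static void middle() {
        inner(); // no try block either, so propagation continues upward
    }

    public static void main(String[] args) {
        try {
            middle();
        } catch (IllegalStateException e) {
            // The first matching catch block up the call chain handles it.
            System.out.println("handled: " + e.getMessage());
        }
    }
}
```

If main had no catch block either, the exception would reach the top of the call stack and terminate the program with a stack trace, which is exactly the failure mode discussed for Ariane.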
A RuntimeException and any of its subclasses are called unchecked exceptions; the other subclasses of Exception (and Exception itself) are checked exceptions. Section 3.1 is dedicated to discussing the difference between checked and unchecked exceptions.

3.1 Checked vs Unchecked Exceptions

If a method throws an exception, it can either catch it or raise it to the caller. An exception is raised when the method is not able to handle it by itself. If the method throws a checked exception (and does not catch it), it must declare the exception in its public interface. Clients of this method must either catch and handle the exception within their bodies or declare it in their own throws clause. The use of checked exceptions forces client methods to deal with the possibility that the exception will be thrown. If, instead, an unchecked exception is thrown, client methods can decide whether to catch it: with an unchecked exception, the compiler does not force client methods to catch the exception or declare it in a throws clause. The example in Figure 3 presents a class that calls two methods from the Java packages that can throw checked and unchecked exceptions.

// Note: This class won't compile by design!
import java.io.*;
import java.util.Vector;

public class ListOfNumbers {
    private Vector vector;
    private static final int SIZE = 10;

    public ListOfNumbers() {
        vector = new Vector(SIZE);
        for (int i = 0; i < SIZE; i++)
            vector.addElement(new Integer(i));
    }

    public void writeList() {
        PrintWriter out;
        out = new PrintWriter(new FileWriter("OutFile.txt"));
        for (int i = 0; i < SIZE; i++)
            out.println("[" + i + "]=" + vector.elementAt(i));
        out.close();
    }
}

Figure 3 Example of using exceptions in Java

Upon construction, ListOfNumbers creates a Vector that contains ten Integer elements with sequential values 0 through 9. The ListOfNumbers class also defines a method named writeList, which writes the list of numbers into a text file called OutFile.txt. The writeList method calls two methods that can throw exceptions. First, it invokes the constructor for FileWriter, which throws an IOException if the file cannot be opened for any reason. Second, the Vector elementAt() method throws an ArrayIndexOutOfBoundsException if you pass in an index whose value is too small (a negative number) or too large (larger than the number of elements currently contained by the Vector). If you try to compile the ListOfNumbers class, the compiler prints an error message about the exception thrown by the FileWriter() constructor, but displays no error message about the exception thrown by elementAt(). This is because the exception thrown by the FileWriter() constructor, IOException, is a checked exception, while the one thrown by elementAt(), ArrayIndexOutOfBoundsException, is an unchecked exception. To compile the ListOfNumbers class, you have to handle the checked exception, IOException, either by catching it (Figure 4) or by specifying it in the interface of the writeList() method with the throws clause (Figure 5).
public void writeList() {
    try {
        out = new PrintWriter(new FileWriter("OutFile.txt"));
        ...
    } catch (IOException e) {
        System.out.println("IOException");
    }
}

Figure 4 Catching an IOException

public void writeList() throws IOException { ... }

Figure 5 Throwing an exception not handled internally

4 ARIANE VS UNCHECKED EXCEPTIONS

Suppose that the Ariane 4 software was implemented in Java and that the contract violation raised an unchecked exception, UnckOverflowException. The method convert tests whether its parameter is larger than the maximum permitted; if it is, the unchecked exception UnckOverflowException is raised. See Figure 6. As explained in Section 3, if a method throws an unchecked exception the compiler does not force client methods to catch the exception or declare it in a throws clause. In other words, client programmers are not forced to write try and catch clauses every time they invoke the convert method. We consider this a loophole in Java's exception handling mechanism, as lazy programmers are tempted to make all exceptions runtime exceptions. In our experience, we have noted that some programmers adopt unchecked exceptions when the abnormal condition is a failure of contractual obligations (preconditions). According to them, the client should know and fulfill the contract before requesting the service; moreover, they argue that they cannot force client programmers to deal with these exceptions on every invocation of the method. In general, this is not recommended because, while it may seem convenient to the programmer, it assumes that the client has complete knowledge of the contract. Sometimes the client is not fully aware of all the contract constraints and, as stated earlier, the contract must be explicitly defined in the software itself. If an unchecked exception is raised, the client may not realize that something is wrong and consequently has no chance of recovering from the error. Note that the interface of the method convert says nothing about the exceptions it raises.
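A minimal runnable sketch of this loophole (class and method names are hypothetical, echoing Figure 6) shows an undeclared exception crossing the method boundary:

```java
public class UncheckedDemo {
    // Hypothetical stand-in for the Ariane conversion: nothing in the
    // signature of convert warns callers that this exception may escape.
    static class UnckOverflowException extends RuntimeException {}

    static int convert(double num) {
        if (num > Short.MAX_VALUE) throw new UnckOverflowException();
        return (int) num;
    }

    public static void main(String[] args) {
        System.out.println(convert(100.0)); // within range: prints 100
        try {
            convert(1e9);                   // propagates silently...
        } catch (RuntimeException e) {      // ...until someone happens to catch it
            System.out.println("caught: " + e.getClass().getSimpleName());
        }
    }
}
```

The compiler accepts callers of convert with no try block and no throws clause, so the defensive catch in main is entirely optional: remove it and the program still compiles, but the second call terminates it.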
Unless the client reads the method code, he cannot imagine that the UnckOverflowException exception could be thrown, nor that he should have a handler to catch it. If this exception is not caught by the caller, it is automatically propagated up the call stack. However, the exception would be meaningless to the classes above, and it would continue propagating without a handler until it hit the highest method in the call stack and the system exited abnormally. This is one reason why the automatic exception propagation adopted by the Java exception mechanism is strongly criticized by several specialists [Ferreira et al., 2001; Garcia et al., 1999].

public class UnckOverflowException extends RuntimeException { ... }

public Integer convert(double num) {
    if (num > maximum_bias) {
        throw new UnckOverflowException();
    }
    ...
}

Figure 6 Throwing an unchecked exception UnckOverflowException

5 ARIANE VS CHECKED EXCEPTIONS

Suppose now that the contract violation raised a checked exception, CkOverflowException. The method convert tests whether its parameter is larger than the maximum permitted; if it is, the checked exception CkOverflowException is raised. See Figure 7.

public class CkOverflowException extends Exception { ... }

public Integer convert(double horizontal_bias) throws CkOverflowException {
    if (horizontal_bias > maximum_bias) {
        throw new CkOverflowException();
    }
    ...
}

Figure 7 Throwing a checked exception CkOverflowException

As explained in section 3, methods which raise checked exceptions must declare them in their public interface. Moreover, the compiler forces clients of those methods either to catch and handle the exception within the body of their own methods, or to declare the exception in their interface. The convert method must therefore specify that it throws CkOverflowException (Figure 7), and its client, clientOfConvert, will have one of the constructions shown in Figure 8. In the first construction, the method clientOfConvert tests whether its service could be provided by putting the request within the try block. If the service could not be provided, the raised exception, CkOverflowException, is caught and handled by it; in this example, handling the CkOverflowException means printing a message on the screen. In the second construction, the method clientOfConvert considers that it is not able to deal with this exception and decides to raise it to its caller. Since CkOverflowException is a checked exception, the compiler obligates the method to specify, in its interface, the exceptions that can be thrown. In fact, the method's public interface may include more than just the exceptions that can be thrown directly by the method: it also includes exceptions that are thrown indirectly by the method through calls to other methods. In summary, the throws clause includes all checked exceptions that can be thrown while the flow of control remains within the method.

With this Java requirement, any checked exception that can be thrown by a method is really part of the method's public programming interface: callers of a method must know about the exceptions that a method can throw in order to intelligently and consciously decide what to do about those exceptions.

public void clientOfConvert() {
    try {
        l.convert(m);
    } catch (CkOverflowException e) {
        System.out.println("Overflow exception");
    }
}

or

public void clientOfConvert() throws CkOverflowException {
    l.convert(m);
}

Figure 8 Two constructions of the clientOfConvert method
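The two constructions of Figure 8 can be exercised in a small, self-contained program. The class and method names follow the paper's figures; the numeric limit and return values are illustrative choices, not taken from the paper.

```java
// Minimal runnable sketch of Figures 7 and 8: a checked exception forces
// callers either to handle the overflow or to declare it in their own
// throws clause.
class CkOverflowException extends Exception {}

class Converter {
    private final double maximumBias;

    Converter(double maximumBias) { this.maximumBias = maximumBias; }

    // The checked exception must appear in the method's public interface.
    Integer convert(double horizontalBias) throws CkOverflowException {
        if (horizontalBias > maximumBias) {
            throw new CkOverflowException();
        }
        return (int) horizontalBias;
    }
}

public class ClientOfConvert {
    // First construction of Figure 8: handle the exception locally.
    static String tryConvert(Converter c, double value) {
        try {
            return String.valueOf(c.convert(value));
        } catch (CkOverflowException e) {
            return "Overflow exception";
        }
    }

    public static void main(String[] args) {
        Converter c = new Converter(32767.0);
        System.out.println(tryConvert(c, 100.0));   // prints: 100
        System.out.println(tryConvert(c, 99999.0)); // prints: Overflow exception
    }
}
```

Removing the catch block forces a compile error unless tryConvert itself declares throws CkOverflowException, which is exactly the second construction of Figure 8.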

6 WHEN TO USE CHECKED OR UNCHECKED EXCEPTIONS

When to use checked or unchecked exceptions depends on the goals of your project. Sometimes the cost of checking exceptions can exceed the benefit of catching or specifying them; at other times, efficiency is not as important as reliability and robustness. If you would like to construct robust, reliable and reusable components, you ought to be sure that the contract is explicit in the code itself. These components must explicitly declare all exceptions they can raise, directly or indirectly, in order to permit clients to decide what to do about those exceptions. Since we cannot trust the discipline of programmers to put unchecked exceptions in the method's public interface, we have to use only checked exceptions. Conversely, if you prefer making your code more efficient than reliable, you can use unchecked exceptions. When using unchecked exceptions, you should keep in mind that you may be avoiding declaring the exceptions the method can throw; in other words, you are not fully documenting the method's behavior. This is hardly ever good, or even harmless. We will give some rules we consider feasible for using unchecked exceptions. Some of these rules were extracted from the Java Tutorial [JavaTutorial] and the others are our suggestions. A method can detect and throw a RuntimeException when it encounters an error in the virtual machine runtime; however, it is typically easier to just let the virtual machine detect and throw it. Similarly, you should create a subclass of RuntimeException only when you are signaling an error in the virtual machine runtime (which you probably aren't). Do not throw a runtime exception or create a subclass of RuntimeException simply because you don't want to be bothered with specifying the exceptions your methods can throw. Our rules are: use unchecked exceptions if you are sure that your client has complete knowledge of all the contract constraints.
For example, we consider it viable for the exception ArrayIndexOutOfBoundsException to be unchecked. Methods that access array elements are invoked very frequently, and testing the bounds every time may be costly. Moreover, programmers are sufficiently familiar with arrays and certainly know how they work. Following Lee and Anderson's terminology [Lee and Anderson, 1990], a component signals an internal exception when it intends to handle it by itself, and raises an external exception if it determines that for some reason it cannot provide its service. Within the component you are designing, it is assumed that you have complete control of exception propagation; in this case we consider that unchecked exceptions can be used. This means using unchecked exceptions in classes that are not on the boundary of the component and in methods that are not part of the component's public interface. You can also use unchecked exceptions if you are signaling implementation errors that you are sure will be identified by tests, although this is very tricky to get right and should be avoided.

7 HOW TO USE UNCHECKED EXCEPTIONS

We have already mentioned that the client must be informed about all exceptions a method can throw in order to handle them efficiently. Therefore, if you decide to use unchecked exceptions, extra care is necessary to make explicit all exceptions raised by your method. This can be done by declaring them in the method's public interface. Although this reduces the risk, it is not a guarantee that things will work as desired: as we know, when an unchecked exception is raised, the client is not obligated to catch it or to declare it in its own method's interface. Therefore, all programmers must be concerned with explicitly declaring exceptions, or the effort will be lost. This is a difficult task because the compiler does not help, and success depends only on the discipline of the programmers.
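This advice can be made concrete: an unchecked exception can be declared in the throws clause and documented, even though the compiler will not enforce it on callers. The sketch below reuses the paper's UnckOverflowException; the limit value is illustrative.

```java
// An unchecked exception, as in Figure 6 of the paper.
class UnckOverflowException extends RuntimeException {}

public class DocumentedConverter {
    private final double maximumBias = 32767.0; // illustrative limit

    /**
     * Converts a bias value.
     *
     * @throws UnckOverflowException if the value exceeds the maximum bias.
     *         Declared explicitly even though it is unchecked, as this
     *         section recommends.
     */
    public int convert(double value) throws UnckOverflowException {
        if (value > maximumBias) {
            throw new UnckOverflowException();
        }
        return (int) value;
    }

    public static void main(String[] args) {
        DocumentedConverter d = new DocumentedConverter();
        System.out.println(d.convert(10.0)); // prints: 10
        try {
            d.convert(1e9);
        } catch (UnckOverflowException e) {
            // Nothing forces this handler: the compiler stays silent if it
            // is removed, which is exactly the risk pointed out above.
            System.out.println("caught overflow");
        }
    }
}
```

The throws clause and the javadoc are the only record of the contract; deleting the catch block still compiles, so the documentation discipline must come from the programmers.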
How can we trust the discipline of lazy programmers who use unchecked exceptions precisely to save themselves the work of checking whether the contract was fulfilled? Furthermore, combining automatic exception propagation with the lack of handlers for

unchecked exceptions can result in an incomprehensible exception propagation model. You will probably lose control of the exception propagation path and, consequently, will not be able to recover the system efficiently. In order to confine the error propagation and handle the exception efficiently, all exception handlers should be in the caller.

8 HOW TO REUSE NON-DEPENDABLE COMPONENTS

We now discuss some strategies to reuse third-party components A and B that were not implemented to be fault tolerant. In the example in Figure 9, component A, implemented by class A, requests services which component B, implemented by class B, provides through its interface m2. However, the method m2 of component B throws an exception E that is not explicitly declared in its interface (an unchecked exception, of course). Therefore, A does not know which exceptions can be thrown and consequently has no handlers for them. When the exception E is thrown, no handler is found in the caller (component A) and the exception is automatically propagated until it reaches a generic handler that cannot deal with it efficiently.

[Figure 9: component A (class A, whose method m1 calls a1.m2()) requesting services from component B (class B, method m2)]
Figure 9 Two non-fault-tolerant third-party components

In order to reuse these components, we propose two alternatives, described in sections 8.1 and 8.2. A strategy shared by the two alternatives is to represent explicitly all exceptions a method can throw. So, method m2 of component B should declare the exception E in its throws clause. However, B is a black-box component, i.e., we have no access to its code, so the interface of method m2 cannot be changed. A solution to this problem is to construct a wrapper B' that redefines B's interface and exposes the exceptions that can be raised by its methods, as shown in Figure 10.
[Figure 10: wrapper B' (field b of type B; method m2 declared as throws E, delegating to b.m2()) around the black-box component B]
Figure 10 B' wrapping the B component

Component A is configurable and will be configured to call B' instead of B. Since the exception E can now be thrown explicitly, A must have a handler to deal with it efficiently. However, component A is also a third-party, black-box component that cannot be changed. The problem is now how to include an exception handler in component A without changing its code. The two alternatives below provide different solutions to this problem.

8.1 Alternative A

In this alternative, component A's handlers are placed in another class, an abnormal class, named in this example ExceptionalA. The class ExceptionalA has the method EHandler, which is the handler for the exception E raised by component B. See Figure 11. To implement this alternative, we need an exception handling mechanism that supports the explicit separation of the normal activity from its exception handlers. In [Garcia et al., 1999] the authors present the specification and implementation of an exception mechanism implemented in the Java programming language by means of a metaobject protocol (MOP) named Guaraná [Oliva and Buzato, 1998]. The application components are implemented at the base level, while the meta-objects implement the specific responsibilities of the exception mechanism. When a normal class of the component signals an exception, it is intercepted by the metaobject protocol and the meta-objects find an adequate exception handler in the abnormal class. The abnormal classes are hierarchically organized, allowing subclasses to inherit handlers from their superclasses and, consequently, permitting the reuse of abnormal code. The abnormal class hierarchy is orthogonal to the normal class hierarchy.
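The wrapper B' of Figure 10 can be sketched in plain Java. The black-box component B is simulated here, since the paper's B is third-party code; its behavior and return values are illustrative.

```java
// The unchecked exception E from the paper's example.
class E extends RuntimeException {}

// Simulated third-party black-box component: E does not appear in
// m2's interface, so callers cannot know it may be thrown.
class B {
    String m2(boolean fail) {
        if (fail) throw new E();
        return "service result";
    }
}

// B': redefines B's interface, making the exception explicit. Declaring
// an unchecked exception in a throws clause is legal Java and documents
// the contract, even though the compiler will not enforce it on callers.
class BPrime {
    private final B b = new B();

    String m2(boolean fail) throws E {
        return b.m2(fail);
    }
}

public class WrapperDemo {
    public static void main(String[] args) {
        BPrime bPrime = new BPrime();
        System.out.println(bPrime.m2(false)); // prints: service result
        try {
            bPrime.m2(true);
        } catch (E e) {
            // The redefined interface of B' tells clients this handler
            // is needed, without any access to B's source code.
            System.out.println("handled E");  // prints: handled E
        }
    }
}
```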

[Figure 11: component A (class A, method m1 calling a1.m2()) and component B (wrapper B', declaring throws E, over class B), with A's handler EHandler placed in the abnormal class ExceptionalA]
Figure 11 Reusing A and B by means of an exception mechanism based on a meta-level approach

The advantage of this alternative is that it keeps a clear and transparent separation between the normal activity of a component and its handlers, instead of keeping the normal and abnormal code amalgamated. This separation of concerns makes the components easier to understand, change, maintain and reuse.

8.2 Alternative B

If the exception mechanism proposed in [Garcia et al., 1999] is not available, we can also reuse the third-party components A and B using the standard Java exception handling mechanism. The solution is to create a class A' that is able to catch the exceptions that A cannot. A' works like a proxy: the configurable component A requests services from A' instead of B'. A' then requests services from B', handles any exceptions raised, and gives the answers back to A, as shown in Figure 12. This solution also keeps the separation between the normal activity of component A, implemented by the class A, and its exceptional activity, implemented by the class A'. The only disadvantage of this approach is the extra indirection to access the services of component B.

9 CONCLUSIONS

The Java exception mechanism has two different exception types: checked and unchecked exceptions. Checked exceptions are the descendants of the class Exception, excluding the class RuntimeException and its descendants. Unchecked exceptions are the descendants of the class RuntimeException. The major difference between checked and unchecked exceptions is that the latter need not be explicitly declared in the throws clause of a method, and the compiler does not require clients to have handlers for them. So, if a method throws an unchecked exception, the compiler does not force client methods to catch it or declare it in a throws clause.
In other words, client programmers are not forced to put try-catch blocks around every invocation of a method that raises an unchecked exception.

[Figure 12: component A (class A, m1 calling a1.m2() on the proxy A'); A' requesting services from B' and catching the exceptions it raises; wrapper B' declaring throws E over the black-box class B]
Figure 12 Reusing A and B using the Java exception handling mechanism

Although using unchecked exceptions

is less costly, the lack of an effective handler for an exception occurrence can result in a less reliable component. This construction is a loophole in Java's exception mechanism, as lazy programmers are tempted to make all exceptions unchecked. In this paper we presented some guidelines and tips on when and how to use Java checked and unchecked exceptions in order to construct dependable software components. In summary, all exceptions (checked or unchecked) raised by a method should be explicitly represented in its public interface, and every exception must have a handler to deal with it. Do not throw an unchecked exception simply because you don't want to be bothered with specifying the exceptions your methods can throw. Use unchecked exceptions only if you are sure that your client has complete knowledge of all the contract constraints, or internally in the component you are designing, i.e., not in the component's public interface. You can also use unchecked exceptions if you are signaling implementation errors that you are sure will be identified by tests, although this is very tricky to get right. We also presented some alternatives for reusing non-dependable components in a dependable system, by means of a reflective exception mechanism and of the standard Java exception mechanism.

10 REFERENCES

[Avizienes, 1997] AVIZIENES, A.; "Toward Systematic Design of Fault-Tolerant Systems". Computer 30(4), April 1997.
[Cristian, 1989] CRISTIAN, Flaviu; "Exception Handling", in Dependability of Resilient Computers (ed. T. Anderson), pp. 68-97, Blackwell Scientific Publications, 1989.
[de Lemos and Romanovsky, 2000] DE LEMOS, Rogério and ROMANOVSKY, Alexander; "Exception Handling in the Software Lifecycle". Int. Journal of Computer Systems Science and Engineering 16(2), March 2000.
[Ferreira et al., 2001] FERREIRA, Gisele, RUBIRA, Cecília and DE LEMOS, Rogério; "Explicit Representation of Exception Handling of Dependable Component-Based Systems". HASE 2001, October 2001.
[Garcia et al., 1999] GARCIA, Alessandro, BEDER, Delano and RUBIRA, Cecília; "An Exception Handling Mechanism for Developing Dependable Object-Oriented Software Based on a Meta-Level Approach". Proceedings of the 10th IEEE Symposium on Software Reliability Engineering, 1999.
[JavaTutorial] The Java Tutorial, al/exceptions/runtime.html
[Lee and Anderson, 1990] LEE, P. and ANDERSON, T.; Fault Tolerance: Principles and Practice. Springer-Verlag, 2nd Edition, 1990.
[Martin and Murphy, 2000] ROBILLARD, Martin P. and MURPHY, Gail; "Designing Robust Java Programs with Exceptions". Software Engineering Notes, November 2000.
[Meyer, 1997a] MEYER, Bertrand; Object-Oriented Software Construction, 2nd Edition. Prentice-Hall, 1997.
[Meyer, 1997b] MEYER, Bertrand; "Design by Contract: The Lessons of Ariane". IEEE Computer, vol. 30, no. 2, January 1997.
[Oliva and Buzato, 1998] OLIVA, Alexandre and BUZATO, Luiz Eduardo; "Reflective Programming in C++ and Java". OOPSLA'98, Vancouver, Canada, October 1998.

A LDAP-BASED KEY AUTHENTICATION FRAMEWORK FOR ISAKMP

Ricardo Encarnação Carraretto, Departamento de Informática, Universidade Federal do Espírito Santo, Vitória, ES
José Gonçalves Pereira Filho, Departamento de Informática, Universidade Federal do Espírito Santo, Vitória, ES

ABSTRACT

Along with the new business opportunities brought about by the Internet, a great demand for security protocols has arisen. Several applications (e.g. conferencing, video on demand, e-commerce) may also use these security protocols in order to deliver secret or restricted content. The Internet Engineering Task Force (IETF) has agreed on a group of protocols designed to bring real security to the IP layer, formally known as IPSEC. Within this suite, the Internet Security Association and Key Management Protocol (ISAKMP) is the one responsible for the tasks regarding key management. Although it does not specify an authentication mechanism, it does require it to be strong. This study suggests an authentication framework based on LDAP directory services, which have proven to be the de facto directory standard in the Internet environment.

1 INTRODUCTION

As the commercial use of the internetwork became part of our lives, the need for security services in the Internet started to grow. From a simple home-banking application up to complex B2B (business-to-business) systems, everyone needs, or at least would expect, security. The IETF IPSEC Working Group has defined a series of protocols to be used at the IP layer that provide a good level 1 of security, implementing some sort of authentication, secrecy, or both. AH (Kent and Atkinson, 1998) and ESP (Kent and Atkinson, 1998), the security mechanisms provided by these protocols, rely on the establishment of a security association (SA) between two communicating parties that defines the valid rules for the current communication. This is handled by ISAKMP (Maughan et al., 1998), a flexible framework designed to manage the SA establishment.
It requires a strong authentication mechanism, but does not specify one. One example of such a strong mechanism is public key cryptography, as embodied in the RSA and DSA algorithms (Stallings, 1995). This brings to our attention the need for digital certificates that bind a subject (e.g. a person or a host) to its public key, proving its authenticity. The LDAP directory (Wahl et al., 1997) is an interesting way of storing those certificates, and it is accepted worldwide as the de facto standard for directory services in the Internet (Johner et al., 1998). In addition, it has Java support through libraries provided by ISVs 2 (like iPlanet (iPlanet, 2001), the Netscape-Sun alliance). Therefore, the implementation of such a framework would improve the usage of the IPSEC technology in environments that cannot afford a proprietary strong authentication solution. This paper is organized as follows. Section 2 provides some concepts about security and cryptography, including digital certificates. Section 3 explains the LDAP directory and how it can be used to store digital certificates. Section 4 shows some issues related to the key management problem, along with details of ISAKMP that are relevant to this work. Section 5 presents our proposal for integrating the LDAP directory and ISAKMP. Finally, Section 6 draws conclusions and presents future research directions.

2 SECURITY CONCEPTS

According to Garfinkel (Garfinkel and Spafford, 1996), "A computer is secure if you can depend on it and its software to behave as you expect." This definition also applies to a networked environment, where we expect the whole system to work as it was planned to. Depending on the type of service being held, someone might need one or more of these requirements (Lethi, 1998):
Availability: The peer system is able to perform its task and deliver the requested information in the intended way.
Confidentiality: No one else can listen to (or understand) the conversation.
Integrity: No one is undetectably able to delete from, change, or add to the information being transferred.
Identification: The identity of the other party is the one claimed, and it remains the same throughout the session.
Authorization: Whether or not the other party has the right to do something (e.g. access or provide the service).
Non-repudiation: Assures that the events taking place during the session can, beyond any doubt, afterwards be proved to an impartial judge.

1 As good as the chosen cryptography algorithm.
2 Independent Software Vendors

2.1 Threats, Vulnerabilities and Attacks

In the computer security literature, threat, vulnerability and attack are considered technical terms and can be defined as follows (National Research Council, 1999): a threat is an adversary that is motivated and capable of exploiting a vulnerability; a vulnerability is an error or weakness in the design, implementation or operation of a system; an attack is a means of exploiting some vulnerability in a system. Therefore, if an attack causes the interruption of a service, it may be called an availability threat, also known as a denial of service. An eavesdropper, or someone analyzing traffic in the communication channel, is an example of a passive attack. An attacker who deceives the communicating parties in order to make them believe that they are communicating with each other, by intercepting, deleting or inserting messages during the session, performs a man-in-the-middle attack, an example of an active attack (shown below, from (Garfinkel and Spafford, 1997)). In a man-in-the-middle attack, an attacker intercepts all of the communications between two parties, making each think that it is communicating with the other.

[Figure 1: the user thinks he/she is talking to the server; the server thinks it is talking to the user; the man in the middle appears to be the server to the user, and the user to the server]
Figure 1 A typical man-in-the-middle attack.

For example, A asks for B's public key in order to send him/her a message. B sends it to A, but it is intercepted by C. C sends his/her own public key to A (who thinks it is B's). Now, when A encrypts a message with the fake B's public key, C can intercept it, decrypt it and resend it to B using the previously captured B's public key.

2.2 Prevention and Protection

It is practically impossible to counter all types of attacks aimed at a computer system, but a lot can be done to prevent them and protect your system.
By using cryptographic algorithms, one can bring the risk involved in being part of a connected world down to an acceptable level. For example, a passive attack (like eavesdropping) can be successfully countered by encrypting the communication channel. On the other hand, cryptography by itself can be useless for protecting against a denial of service attack; it must be coupled with some kind of heuristics to effectively minimize these attacks (like the cookie mechanism implemented in the ISAKMP protocol).

2.3 Cryptography and Digital Certificates

Cryptography, from the Greek kryptós (which means hidden) and graphos (which means writing), can be divided into two groups, according to the way the key(s) used to protect data is (are) generated and used: symmetric (secret key) and asymmetric (public and private key pair). In practice, both methods are used together, usually applying a public key cryptography scheme to deliver a secret key that will be used in the communication session. This happens due to the relatively faster speed that secret key cryptosystems display when compared to public key ones (Stallings, 1998).

Symmetric Key Cryptosystems

The communicating parties share a secret key that is used to encrypt and decrypt the messages being sent. This key must be delivered by some secure means, either by breaking it up and sending it through different communication channels (e.g. telephone, fax and e-mail), or by encrypting it using a pre-shared master key (Stallings, 1995):

[Figure 2: the plaintext passes through the encryption algorithm under the shared key, producing the ciphertext, which the decryption algorithm reverses with the same key]
Figure 2 - Symmetric encryption

Asymmetric Key Cryptosystems

Each user generates a key pair (one public and one private), one used to encrypt and the other to decrypt the message (Stallings, 1995):

[Figure 3: user A encrypts the plaintext with B's public key; user B decrypts the resulting ciphertext with B's private key]
Figure 3 - Asymmetric encryption.
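The asymmetric scheme of Figure 3 can be exercised with Java's standard cryptography API (the paper itself notes the framework's Java support). The RSA algorithm and the 2048-bit key size below are our illustrative choices, not prescribed by the paper.

```java
import javax.crypto.Cipher;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;

public class AsymmetricDemo {
    // Encrypts with the receiver's public key and decrypts with the
    // matching private key, returning true when the plaintext survives
    // the round trip, as in Figure 3.
    static boolean roundTrip(String message) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair b = gen.generateKeyPair(); // B's key pair

        byte[] plaintext = message.getBytes(StandardCharsets.UTF_8);

        Cipher enc = Cipher.getInstance("RSA");  // A's side
        enc.init(Cipher.ENCRYPT_MODE, b.getPublic());
        byte[] ciphertext = enc.doFinal(plaintext);

        Cipher dec = Cipher.getInstance("RSA");  // B's side
        dec.init(Cipher.DECRYPT_MODE, b.getPrivate());
        byte[] recovered = dec.doFinal(ciphertext);

        return Arrays.equals(plaintext, recovered);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("secret message")); // prints: true
    }
}
```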
Prior to sending a message to B, A encrypts it with B's public key. Now, only B (with its private key) is able to extract the message from the ciphertext. On the other hand, if B wants to send a

message and prove that it was written by him/her, all he/she needs to do is encrypt it with his/her private key. Then, everyone who possesses B's public key can decrypt the message and read it, which proves its authenticity. This process is called digital signing.

Hash Functions and Message Digests

Hash functions are mathematical functions that take some input (usually of indefinite length) and produce an output that is significantly shorter than the input (Garfinkel and Spafford, 1996). There are several properties that a hash function H must display in order to be considered useful for message authentication; these can be seen in (Stallings, 1995). The role of the hash function is to produce a message digest that can be used for authentication and integrity verification.

Digital Certificates

A digital certificate is a document signed by a Trusted Third Party (TTP) that validates the information it contains. One example of such a certificate is the X.509 certificate (Henshall and Shaw, 1990) proposed by the International Standards Organization (ISO):

[Figure 4: X.509 certificate fields - version, serial number, algorithm identifier (algorithm, parameters), issuer, validity (not before, not after), owner, owner's public key (algorithm, parameters, public key), signature]
Figure 4 X.509 certificate.

By verifying the TTP signature, A and B can be sure that the public keys involved in the communication are trustworthy. In the following section we show how those certificates can be efficiently stored in a hierarchical and distributed structure, making them straightforward to retrieve and use.

3 THE LDAP DIRECTORY

A directory is a collection of information optimized for reading, searching and browsing. The directory standard proposed by ISO and ITU-T in 1988 is the X.500 directory service.
However, as it requires the whole stack of OSI protocols, it is inadequate to be used as-is in the Internet context, drawing our attention to a very interesting alternative, LDAP (Lightweight Directory Access Protocol) (Wahl et al., 1997). It is used to provide access to an X.500 directory service without the session/presentation overhead of the X.500 DAP (Directory Access Protocol) (Henshall and Shaw, 1990), running over TCP (or another transport protocol). The protocol also allows access to other directory services that follow the X.500 model, which is the case of OpenLDAP (OpenLDAP, 2001).

3.1 Directory Structure

The LDAP information model is based on entries, which are collections of attributes that have a global and unique distinguished name (DN), used to refer to the entry unambiguously. The information is arranged in a hierarchical, tree-like structure, as exemplified below:

[Figure 5: a directory tree whose root branches into countries (c=US, c=BR); under c=BR (st = Espírito Santo) sits the organization o=UFES, holding the organizational units ou = Depto. de Informática and ou = Depto. de Eng. Civil, the former containing the person entry cn = Ricardo Carraretto]
Figure 5 The directory structure.

3.2 Accessing the Information

The directory allows search and update operations that can be used by an end user to interact with its database. The information is referenced by its relative distinguished name (RDN) appended by those of its parent entries (which defines its distinguished name). Thus, considering Figure 5, the RDN of Ricardo Carraretto would be cn = Ricardo Carraretto, and the DN (going up the tree structure), cn = Ricardo Carraretto, ou = Depto. de Informática, o = UFES, c = BR (Wahl et al., 1997). When an LDAP client asks the server a question, the server can reply with the answer and/or a pointer to where the client can get additional information (usually another LDAP server). These two types of answers define the concepts of response and referral, respectively.

3.3 Digital Certificate Storage

The structure provided by LDAP and its commitment to the Internet environment make it an interesting way of storing user information, including the X.509 certificates needed to enforce the ISAKMP authentication.
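A certificate lookup along the lines of section 3.3 could be issued through Java's JNDI LDAP provider. The sketch below only builds and checks the filter and search controls; the server URL, base DN, objectClass and the use of the standard userCertificate attribute are illustrative assumptions, so the actual connection code is shown as a comment.

```java
import javax.naming.directory.SearchControls;

public class CertLookup {
    // Builds an LDAP search filter for a person entry; 'cn' follows the
    // naming of Figure 5 and 'objectClass=person' is illustrative.
    static String certFilter(String commonName) {
        return "(&(objectClass=person)(cn=" + commonName + "))";
    }

    // Requests only the certificate attribute over a subtree search.
    static SearchControls certSearchControls() {
        SearchControls controls = new SearchControls();
        controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
        controls.setReturningAttributes(
                new String[] {"userCertificate;binary"});
        return controls;
    }

    public static void main(String[] args) {
        System.out.println(certFilter("Ricardo Carraretto"));
        // Against a reachable server, the lookup would go through JNDI:
        //   Hashtable<String, String> env = new Hashtable<>();
        //   env.put(Context.INITIAL_CONTEXT_FACTORY,
        //           "com.sun.jndi.ldap.LdapCtxFactory");
        //   env.put(Context.PROVIDER_URL, "ldap://directory.example:389");
        //   DirContext ctx = new InitialDirContext(env);
        //   ctx.search("o=UFES,c=BR", certFilter("Ricardo Carraretto"),
        //              certSearchControls());
    }
}
```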

4 KEY MANAGEMENT AND ISAKMP

Under the large scope of key management, one subject of particular interest is the use of public keys to distribute session secret keys. Several mechanisms have been proposed for the distribution of public keys (Stallings, 1995): public announcement, publicly available directory, public key authority and public key certificates.

Public Announcement of Public Keys

The public announcement of public keys is a very common way to spread a public key (e.g. by attaching it to every e-mail sent), but it is subject to forgery and is not considered a safe way of key distribution.

Publicly Available Directory

In this model, a greater level of security is reached, as the maintenance and distribution of the public directory are delegated to some trusted entity or organization. However, it still has some drawbacks. If someone succeeds in obtaining or computing the private key of the directory authority, he/she would be able to pass out fake keys and impersonate any registered member (or eavesdrop on messages sent to one).

Public Key Authority

Figure 6, from (Stallings, 1995), shows the role of a public-key-based authentication service, with its notation explained in Table 1. The only prerequisite is that every user stores the public key of the Public Key Authority (PKA) locally and has obtained it in a secure way. This approach is secure enough, but the Public Key Authority poses as a bottleneck, since it must be accessed very often to retrieve particular public keys. Also, the authentication takes place only after too many round-trips.

Public Key Certificates

This approach is based on digital certificates that are used by the communicating parties to exchange keys without the need of contacting a public key authority. Each certificate (created by the Certification Authority, CA) contains a public key and issuer and subject information, and is given to the party holding the corresponding private key.
The certificate must be sent prior to communication, and its authenticity can be validated by verifying the signature of the CA (printed inside it), as depicted below.

[Figure 6: authentication through a Public Key Authority, between Initiator A and Responder B:
(1) A to PKA: Request, Time1
(2) PKA to A: E_KRauth[KUb, Request, Time1]
(3) A to B: E_KUb[IDA, N1]
(4) B to PKA: Request, Time2
(5) PKA to B: E_KRauth[KUa, Request, Time2]
(6) B to A: E_KUa[N1, N2]
(7) A to B: E_KUb[N2]]
Figure 6 Public key based authentication.

Table 1 A public key based authentication convention (Stallings, 1995).
Symbol - Description
Time_i - A time-stamp, to provide replay protection.
ID_X - An identification of X.
KU_i - The public key of i.
E_KUi[m] - A message m, encrypted with i's public key.
E_KRi[m] - A message m, encrypted with i's private key.
N_i - A nonce, to uniquely identify a transaction.
, - The concatenation operator.

[Figure 7: certificate-based authentication between Initiator A and Responder B, with CA = E_KRauth[Time1, IDA, KUa] and CB = E_KRauth[Time2, IDB, KUb]:
(1) A to B: CA
(2) B to A: CB]
Figure 7 Certificate based authentication.

The following requirements are placed on this scheme (Stallings, 1990): any participant can read a certificate to determine the name and public key of the certificate's owner; any participant can verify that the certificate originated from the certificate authority and is not counterfeit; only the certificate authority can create and update certificates. As shown in Figure 7, each participant must apply to the certification authority (in person or through any other secure, authenticated communication channel), supply her public key and request a certificate. In this context, the compromise of a participant's private key is similar to the loss of a credit card: the owner can cancel the certificate when she suspects her private key has been compromised, but is at risk until all possible communicants are aware that the old certificate is obsolete. In this situation, the timestamp can be considered an expiration date, where a sufficiently old certificate is assumed to be expired.

In order to achieve the level of strong authentication required by ISAKMP using public key cryptography, the certificate-based model proves to be the best choice: it addresses the flaws observed in the other proposals and requires fewer message round-trips to retrieve a public key.

Session Secret Key Distribution

Using the digital certificate model, two communicating parties can securely exchange their public keys and use them to exchange a session secret key:

[Figure 8: session secret key exchange between Initiator A and Responder B:
(1) A to B: E_KUb[N1, IDA]
(2) B to A: E_KUa[N1, N2]
(3) A to B: E_KUb[N2]
(4) A to B: E_KUb[E_KRa[Ks]]]
Figure 8 Session secret key exchange using public key cryptography.

In (1), user A sends to B her identification (IDA) and a nonce (N1, a unique identifier of this transaction), encrypted with B's public key. B, in turn, sends to A the nonce N1 and a new nonce N2. Since only B could have decrypted message (1), the presence of N1 in (2) assures A that she is talking to B. To complete the authentication challenge, A sends to B the previous nonce N2, encrypted with B's public key (3). Following this step, A sends the session key Ks to B, encrypted with her own private key (signing it) and with B's public key (so that only B will be able to decrypt it) (4).

4.2 ISAKMP

The purpose of the ISAKMP protocol is to establish, negotiate, modify and delete security associations (SA). SAs contain all the required information for the execution of a large number of network security services, such as the IP-layer services (like AH and ESP), transport or application layer services, or self-protection of network traffic (Maughan et al., 1998). The protocol defines procedures and packet formats to accomplish its tasks.
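Returning to Figure 8, the delivery of the session key Ks in step (4) can be sketched with Java's cryptography API. The sketch performs only the wrapping under B's public key (A's signature with her private key is omitted for brevity), and the RSA/AES algorithms and key sizes are illustrative choices.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.security.Key;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;

public class SessionKeyDelivery {
    // Wraps a fresh session secret key Ks under B's public key and
    // unwraps it with B's private key, mirroring step (4) of Figure 8.
    static boolean deliverSessionKey() throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair b = gen.generateKeyPair(); // B's long-term key pair

        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey ks = kg.generateKey();   // A's fresh session key Ks

        Cipher wrap = Cipher.getInstance("RSA");
        wrap.init(Cipher.WRAP_MODE, b.getPublic());
        byte[] wrapped = wrap.wrap(ks);    // only B can unwrap this

        Cipher unwrap = Cipher.getInstance("RSA");
        unwrap.init(Cipher.UNWRAP_MODE, b.getPrivate());
        Key recovered = unwrap.unwrap(wrapped, "AES", Cipher.SECRET_KEY);

        return Arrays.equals(ks.getEncoded(), recovered.getEncoded());
    }

    public static void main(String[] args) throws Exception {
        System.out.println(deliverSessionKey()); // prints: true
    }
}
```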
The ISAKMP does not mandate encryption algorithms, key exchange methods or authentication mechanisms (although it requires a strong authentication method), which characterizes an extensible and flexible architecture.

Security Associations

A security association is a relationship between two or more entities that describes how they are going to make use of the security services to communicate securely (Maughan et al, 1998). This relationship is represented by a set of information that can be considered a contract between the entities, which is agreed upon and shared by all of them.

Negotiation Phases

The ISAKMP offers two phases of negotiation. In the first phase, the ISAKMP servers agree on how to protect further traffic between them, establishing an ISAKMP SA. This SA is used later in the subsequent negotiations of specific security services (e.g. IPSEC AH or ESP), which constitute the second phase (Maughan et al, 1998).

Payload Types

The protocol provides modular building blocks for the construction of ISAKMP messages, called payloads. An ISAKMP message has a fixed header (shown in figure 9) followed by a variable number of payloads. Detailed information about the ISAKMP payload types can be obtained in (Maughan et al, 1998).

Initiator Cookie | Responder Cookie | Next Payload | MjVer | MnVer | Exchange Type | Flags | Message ID | Length

Figure 9. ISAKMP header format.

From figure 9, the Initiator and Responder Cookies, as well as the Exchange Type field, are of particular interest, since they are closely related to the establishment of the ISAKMP SA.

Security Association Establishment

In order to establish a SA, the Security Association, Proposal and Transform payloads are used. The message consists of one Security Association payload, followed by at least one Proposal payload and at least one Transform payload associated with each Proposal payload. The role of the Proposal and Transform payloads is to inform the responding entity of the types of protection available, so one can be selected prior to the SA establishment.

Concerning the ISAKMP SA establishment, the cookie fields are used to uniquely identify this special SA, while the Message ID and SPI fields are used to identify SAs for other security protocols, as shown in table 2:
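As a quick sanity check on the figure 9 layout, the fixed 28-byte ISAKMP header can be packed and unpacked with Python's struct module; the field widths used here follow RFC 2408 (8-byte cookies, one shared byte for MjVer/MnVer, 32-bit Message ID and Length, network byte order).

```python
import struct

# Fixed ISAKMP header of figure 9: initiator cookie (8 bytes), responder
# cookie (8 bytes), next payload, version byte (MjVer high nibble, MnVer
# low nibble), exchange type, flags, message id, total length.
HDR = struct.Struct("!8s8sBBBBII")

def pack_header(icookie, rcookie, next_payload, exchange_type,
                flags=0, message_id=0, length=28, major=1, minor=0):
    version = (major << 4) | minor            # MjVer in the high nibble
    return HDR.pack(icookie, rcookie, next_payload, version,
                    exchange_type, flags, message_id, length)

def unpack_header(data):
    ic, rc, nxt, ver, extype, flags, msgid, length = HDR.unpack(data[:28])
    return {"icookie": ic, "rcookie": rc, "next_payload": nxt,
            "major": ver >> 4, "minor": ver & 0x0F,
            "exchange_type": extype, "flags": flags,
            "message_id": msgid, "length": length}

raw = pack_header(b"\x01" * 8, b"\x00" * 8, next_payload=1, exchange_type=2)
hdr = unpack_header(raw)
print(HDR.size, hdr["major"], hdr["exchange_type"])  # -> 28 1 2
```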

Table 2. ISAKMP header field usage during SA negotiation.

Operation                     I-Cookie  R-Cookie  MsgID  SPI
Start ISAKMP SA                  X         -        -     -
Respond ISAKMP SA                X         X        -     -
Init other SA                    X         X        X     X
Respond other SA                 X         X        X     X
Other (KE, ID, etc.)             X         X       X/-    NA
Security protocol (AH, ESP)      NA        NA       NA    NA

NA = Not applicable / KE = Key Exchange / ID = Identification.

Exchange Types

The protocol provides five default exchange types (Maughan et al, 1998), with characteristics that suit them to particular situations:

- Base Exchange: allows key exchange and authentication information to be transmitted together. It reduces the number of round-trips at the expense of not providing identity protection.
- Identity Protection Exchange: provides identity protection at the expense of two additional messages.
- Authentication Only Exchange: allows only the transmission of authentication-related information. The advantage of this exchange is its ability to authenticate without the expense of computing keys.
- Aggressive Exchange: allows all the information relevant to establishing the SA to be transmitted at once, at the expense of not providing identity protection.
- Informational Exchange: a one-way transmittal of information, used to handle security association management.

Internet IP Security DOI

The ISAKMP requires a Domain of Interpretation (DOI) to be used in a particular context. The Internet IP Security DOI defines exchanges, payloads and processing guidelines that must be followed when applying the ISAKMP in the IPSEC environment. It also states guidelines for using digital certificates for authentication (Piper, 1998): host systems implementing a certificate-based authentication scheme will need a mechanism for obtaining and managing a database of certificates.

4.3 Internet Key Exchange (IKE)

The key exchange required by ISAKMP to establish the ISAKMP SA is handled by the IKE protocol (Harkins and Carrel, 1998), during its Phase 1.
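The five exchange types above fill the Exchange Type field of the ISAKMP header; the numeric values below are those assigned in RFC 2408. The chooser function is a hypothetical helper, not part of any real ISAKMP implementation: it merely encodes the trade-offs described in the text (round-trips versus identity protection).

```python
# Exchange Type values as assigned by RFC 2408 (0 is reserved for NONE).
EXCHANGE_TYPES = {
    "base": 1,
    "identity_protection": 2,
    "authentication_only": 3,
    "aggressive": 4,
    "informational": 5,
}

def choose_exchange(need_identity_protection: bool,
                    minimize_roundtrips: bool) -> str:
    """Hypothetical policy helper encoding the trade-offs in the text."""
    if need_identity_protection:
        # Identity protection costs two additional messages.
        return "identity_protection"
    if minimize_roundtrips:
        # Aggressive sends everything at once, exposing identities.
        return "aggressive"
    return "base"

print(EXCHANGE_TYPES[choose_exchange(False, True)])  # aggressive -> 4
```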
The main advantage of this two-phase approach is that once the ISAKMP SA has been established, several Phase 2 negotiations can be carried out under it, providing very fast re-keying when necessary. The following attributes can be negotiated by IKE as part of the ISAKMP security association:

- Encryption algorithm.
- Hash algorithm.
- Authentication method.
- Information about a group over which to perform the Diffie-Hellman exchange (Stallings, 1995).

5 THE LDAP INTEGRATION PROPOSAL

The integration proposed by this paper takes the form of a Security Policy Daemon (SPDaemon), responsible for starting the ISAKMP negotiation and for providing an interface to the LDAP directory when the parties authenticate each other. Figure 10 illustrates the idea, showing the SPDaemon at Host A beside the protocol stack (application, TCP/UDP, IP, link layer) and the ISAKMP/IKE daemon, with Host B and the LDAP directory reachable across the Internet.

Figure 10. The SPDaemon and its relationships.

In (1), the application running at Host A requests from the SPDaemon a secure communication channel to Host B. Local policy may be applied to check whether the application is eligible for this access. The SPDaemon then checks if the appropriate trusted public key of B is available. If it is not, the SPDaemon searches the LDAP directory for B's key (2), with the search protected by the mechanisms provided by the Simple Authentication and Security Layer (SASL) (Myers, 1997) on top of Transport Layer Security (TLS) (Dierks and Allen, 1999). Once the key is validated,

it is added to the ISAKMP local database (3)³. With the key in place, the ISAKMP daemon negotiates the secure channel (4) and the application at Host A can use it to communicate with Host B (5).

5.1 The Prototype

The prototype itself is not completely portable, as it is partly based on a Linux IPSEC implementation, but the SPDaemon can be extended to support IPSEC implementations available on other platforms (respecting the prerequisite of supporting RSA-based authentication with X.509 certificates). The following steps must be performed to enable the daemon functionality:

1) All users who intend to use the system must provide their RSA public key (used for authentication only) to the SPDaemon.
2) The SPDaemon submits the key to the CA, which signs it and stores it in the LDAP directory.
3) When user A wants to communicate securely with user B, the SPDaemon checks whether B's certificate has already been installed locally and whether it is still valid. If not, a new one is fetched from the directory (if for some reason a new certificate is not available, the process raises an error message and ends the negotiation).
4) With the certificate in place, the secure channel is established by the FreeS/WAN IPSEC daemon.

This model assumes that users are already registered in the LDAP directory, so that they can authenticate prior to the public key submission and storage (signed by the trusted Certificate Authority). At the time of this writing, the SPDaemon was being modeled and implemented using an object-oriented approach based on Java 1.3 (Weltman and Dahbura, 2000). We soon expect to have a working prototype of the daemon running in our lab, applied to a multimedia case study (video on demand).

5.2 Related Work

This paper presents one of the proposed strong authentication mechanisms required by the ISAKMP protocol using digital certificates.
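Step 3 of the prototype is essentially a cache-or-fetch decision: use the locally installed certificate if it is still valid, otherwise query the directory, and abort the negotiation when nothing valid can be obtained. The sketch below models that flow; all names (CertStore, fetch_from_directory) are illustrative stand-ins, not the paper's actual API, and the LDAP search is stubbed with a dictionary.

```python
import time

# Hypothetical sketch of the prototype's certificate lookup (step 3).
class CertStore:
    """Local certificate store with a simple expiry-based validity check."""
    def __init__(self):
        self._certs = {}              # user id -> (cert bytes, expiry time)

    def put(self, user, cert, ttl=3600):
        self._certs[user] = (cert, time.time() + ttl)

    def get_valid(self, user):
        entry = self._certs.get(user)
        if entry and entry[1] > time.time():
            return entry[0]           # locally installed and still valid
        return None

def fetch_from_directory(user):
    # Stand-in for the SASL/TLS-protected LDAP search of figure 10, step (2).
    directory = {"userB": b"---CERT-B---"}
    return directory.get(user)

def obtain_certificate(store, user):
    cert = store.get_valid(user)
    if cert is None:
        cert = fetch_from_directory(user)
        if cert is None:              # no new certificate available
            raise RuntimeError("no valid certificate; ending negotiation")
        store.put(user, cert)         # install locally for later re-use
    return cert

store = CertStore()
print(obtain_certificate(store, "userB"))
```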
In (Nikander and Viljanen, 1998) and (Hasu and Kortesniemi, 2000), a different approach regarding the certificate format and its storage is discussed, using the Simple Public Key Infrastructure (SPKI) concept (Ellison, 1999). One of the differences between SPKI certificates and X.509's lies in the subject of the certificate: in SPKI, instead of an identification (the X.509 approach), an authorization is presented. SPKI also models a quite interesting concept of delegated trust, which allows a user to generate a certificate on behalf of another user without the need to contact the certification authority. Although the SPKI model seems to be more flexible, the X.509 certificate approach based on LDAP directory storage presents a more readily available solution, since it aggregates two strong Internet standards currently in use. In addition, the IETF's PKI (Public Key Infrastructure) working group has already agreed on a suitable schema to store these certificates in the LDAP directory (Boyen et al, 1999).

6 CONCLUSIONS

This study proposes a way to integrate the ISAKMP strong authentication requirement with the facilities provided by LDAP, basing its implementation on Java and the Netscape LDAP SDK 4.1 (Weltman and Dahbura, 2000). One of the advantages of this approach is its foundation on an industry standard, LDAP, a technology becoming more accessible on a daily basis. Additionally, the model implementation provides a flexible, extensible and portable environment, which can be used directly across different hardware and software platforms, as long as they support IPSEC and RSA-based authentication.

³ We are currently using the IPSEC version implemented by the FreeS/WAN Project (Freeswan, 2001).
Another advantage is its foundation on free software, widely available on the Internet, which allows an in-depth study of the authentication problem faced by ISAKMP without relying on any black-box implementation purchased from security software vendors. This work is being carried out in our Multimedia Laboratory as a security support layer for multimedia projects. Future work includes the translation of certification paths, which will allow a multi-CA hierarchy to be used. Another avenue of expansion is support for other types of certificates, such as Pretty Good Privacy (PGP) (PGP, 2000) and SPKI (Ellison, 1999).

7 REFERENCES

BOYEN, S. et al. Internet X.509 Public Key Infrastructure: LDAPv2 Schema. RFC.
DIERKS, T. and ALLEN, C. The TLS Protocol Version 1.0. RFC.
ELLISON, C. SPKI Requirements. RFC.
FREESWAN, The FreeS/WAN Project. Available on-line: [ ].

GARFINKEL, S. and SPAFFORD, G. Practical UNIX and Internet Security. 2nd edition. O'Reilly & Associates, Inc.
GARFINKEL, S. and SPAFFORD, G. Web Security & Commerce. O'Reilly & Associates, Inc.
HASU, T. and KORTESNIEMI, Y. Implementing an SPKI Certificate Repository within the DNS. In: Theory and Practice in Public Key Cryptography. Australia.
HARKINS, D. and CARREL, D. The Internet Key Exchange (IKE). RFC.
HENSHALL, J. and SHAW, S. OSI Explained: End-To-End Computer Communication Standards. 2nd edition. Ellis Horwood Ltd.
IPLANET E-commerce Solutions. Available on-line: [ ].
JOHNER, H. et al. Understanding LDAP. Texas: IBM Redbooks.
KENT, S. and ATKINSON, R. IP Authentication Header. RFC.
KENT, S. and ATKINSON, R. IP Encapsulating Security Payload. RFC.
LETHI, I. SPKI-based Access Control Server. Helsinki: Helsinki University of Technology. Master thesis.
MAUGHAN, D. et al. Internet Security Association and Key Management Protocol (ISAKMP). RFC.
MYERS, J. Simple Authentication and Security Layer (SASL). RFC.
NIKANDER, P. and VILJANEN, L. Storing and Retrieving Internet Certificates. In: 3rd Nordic Workshop on Secure Computer Systems. Trondheim, Norway.
NRC, National Research Council: Trust in Cyberspace. National Academy Press.
OPENLDAP. OpenLDAP Group. Available on-line: [ ].
PGP, PGP Freeware 7.0: An Introduction to Cryptography. Network Associates, Inc.
PIPER, D. The Internet IP Security Domain of Interpretation for ISAKMP. RFC.
STALLINGS, W. Cryptography and Network Security: Principles and Practice. 2nd edition. Prentice Hall.
STALLINGS, W. Network and Internetwork Security: Principles and Practice. Prentice Hall.
WELTMAN, R. and DAHBURA, T. LDAP Programming with Java. Addison-Wesley Pub. Co.
WAHL, M. et al. Lightweight Directory Access Protocol (v3). RFC.
WAHL, M. et al. Lightweight Directory Access Protocol (v3): UTF-8 String Representation of Distinguished Names. RFC.

IMPLEMENTATION OF THE RIJNDAEL CIPHER ON A LOW-COST FPGA

Anderson Cattelan Zigiotto, Divisão de Engenharia Eletrônica, Instituto Tecnológico de Aeronáutica, São José dos Campos, SP
Roberto d'Amore, Divisão de Engenharia Eletrônica, Instituto Tecnológico de Aeronáutica, São José dos Campos, SP
Wagner Chiepa Cunha, Divisão de Engenharia Eletrônica, Instituto Tecnológico de Aeronáutica, São José dos Campos, SP

ABSTRACT

The block cipher Rijndael was recently selected as the new Advanced Encryption Standard (AES). In this work, we present a hardware implementation of the encryption algorithm, with a key length of 128 bits, using an FPGA (Field Programmable Gate Array). The architecture was designed to fit into a mid-density, low-cost device.

1 INTRODUCTION

The selection process for a new advanced encryption standard, AES (NIST 2001, NIST website), to be used by the US government for data protection resulted in the choice of the Rijndael algorithm, and it is expected to be widely adopted by the private sector shortly. Like other recently developed cryptographic algorithms, Rijndael was designed with fast software implementations in mind, especially on 32-bit CPUs. Hardware implementations, however, remain extremely important when high performance is required or when the encryption task must be offloaded from the CPU to a coprocessor.
FPGA-type programmable circuits are excellent candidates for these roles, combining the speed of a dedicated architecture with a flexibility that until recently was found only in software. This flexibility, i.e. the possibility of configuring the FPGA to perform a different function, is extremely important in cryptography, since the algorithm in use may become obsolete or even be broken. Moreover, secure data communication protocols such as SSL and IPSec allow several different algorithms to be used; in this case, the FPGA can be reconfigured at each new session to implement the corresponding cipher. In this work, we propose a dedicated architecture to perform encryption according to the AES-Rijndael standard, with a 128-bit key. The architecture was designed targeting a common, mid-density, low-cost FPGA, specifically the Altera FLEX10K20. The paper is organized as follows: section 2 presents a summary of cryptography, including a description of terms frequently used in the field. Section 3 describes the Rijndael cipher algorithm. The architecture proposed for its implementation is described in section 4. Section 5 then presents the results obtained and a performance estimate, leading to the conclusions of section 6.

2 CRYPTOGRAPHY

Cryptography is the art or science of writing messages in code, making them secure, i.e. ensuring that only the intended recipient can understand them (Schneier, 1995). The process of scrambling the original message to hide its content is called encryption, which produces the ciphered message; the reverse process is called decryption. A cryptographic algorithm, or cipher, is the mathematical function used for encryption and decryption.
This algorithm is generally public knowledge, and the security of the communication rests on a piece of information shared between the sender and the receiver of the message: the key. In symmetric, or secret-key, algorithms, the encryption key is the same as the decryption key, or one is easily obtained from the other. In these algorithms all the security resides in the key, and the sender and the recipient of the message must secretly agree on a common value. In asymmetric, or public-key, algorithms, the encryption and decryption keys are different: the decryption key (private key) cannot be derived from the encryption key (public key), at least not easily. With this type of algorithm, anyone can encrypt messages with the public key, but only the holder of the corresponding private key can decrypt them. Symmetric ciphers can be further divided into two groups: stream ciphers, which act on one bit of the message at a time, and block ciphers, which work on groups of bits, or blocks. The latter category includes the great majority of ciphers in use today, among them Rijndael.

The most widespread block cipher is the DES (Data Encryption Standard), the data encryption standard adopted by the US government in 1977 (NIST, 1993). DES works with 64-bit blocks and 56-bit keys. Like most block ciphers, it applies an initial transformation to the 64 bits of the original message, followed by another transformation applied several times (16 in the case of DES); each of these steps, or rounds, uses a subkey, or round key, derived from the main key. Finally, a last transformation is applied, yielding the ciphered message. Over time the DES algorithm was exhaustively analyzed, resulting in several methods of attacking its security. Moreover, the greatest problem of DES is its relatively small key size: with the advance in processor speed, a brute-force attack, which consists of testing all 2^56 keys until the correct one is found, can be carried out in a short time. One solution was Triple-DES (3-DES), which encrypts the message three times with DES. Although it guarantees very good security, 3-DES encryption is very slow, which led the US government and the cryptanalysis community to seek a new encryption standard that would guarantee a security level equal to or greater than 3-DES while being faster. Thus arose the idea of the AES (Advanced Encryption Standard), which would work with 128-bit blocks and keys of 128, 192 and 256 bits.

The process of choosing the new algorithm to be used as the standard for protecting US government data began when NIST, the US standards and technology institute, asked researchers worldwide to submit their proposals.
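The 2^56 keyspace mentioned above is small enough for back-of-the-envelope arithmetic. The search rate below is an assumed figure for illustration only, not a measured benchmark, and an average attack would find the key after searching about half the space.

```python
# Brute-force arithmetic for the 56-bit DES keyspace.
keys = 2 ** 56                      # total DES keys (worst case: try all)
rate = 10 ** 9                      # assumed: one billion keys per second
seconds = keys / rate
days = seconds / 86_400
print(f"{keys} keys, ~{days:.0f} days at {rate} keys/s")  # -> ~834 days
```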
The following year, fifteen accepted algorithms were announced and the evaluation stage began, in which the cryptographic community was invited to analyze the candidates. Three conferences were held: in the first, the candidates were presented; in the second, in 1999, five finalists were selected, following criteria of security and speed in software; in the third, the following year, the chosen algorithm, Rijndael, was announced (Daemen and Rijmen, 1999). This choice was based on overall performance: security, speed in software and in hardware, and flexibility.

3 THE RIJNDAEL ALGORITHM

This algorithm was proposed as an AES candidate by Joan Daemen and Vincent Rijmen, both Belgian. It was based on the Square cipher, created by the same authors (Daemen et al., 1997), and can work with block or key sizes of 128, 192 and 256 bits. The AES standard, however, specifies only the 128-bit block. All mathematical operations are performed in the finite field GF(2^8), modulo an irreducible polynomial of degree 8: x^8 + x^4 + x^3 + x + 1. The field GF(2^8) can be understood as a set of polynomials over GF(2). For example, the byte {01010111}, or {57}, represents the polynomial x^6 + x^4 + x^2 + x + 1. Addition corresponds to an exclusive-OR operation; multiplication is carried out as a product of polynomials, followed by a modulo operation.

All transformations act on the state, an array of bytes with four rows and Nb columns, where Nb is the block size in 32-bit words (for the AES, Nb = 4). The first transformation is the addition of the key. The remaining operations, which constitute a round, are then repeated several times. The number of round repetitions, Nr, varies with the key size, being 10, 12 and 14 for keys of 128, 192 and 256 bits, respectively. Figure 1 shows the pseudocode of the cipher.

Figure 1. Pseudocode of the cipher.

In each round the state undergoes four transformations: SubBytes(), ShiftRows(), MixColumns() and AddRoundKey(). The subkeys for each round, or RoundKeys, are generated by the key expansion and selection process.

3.1 The SubBytes() Transformation

This transformation is a non-linear substitution performed independently on each byte of the state, using a substitution table, or S-box. The table is invertible and is shown in figure 2. As an example, the byte {53} is replaced by {ed}.
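The GF(2^8) arithmetic described above can be exercised directly. The routine below multiplies two bytes modulo the Rijndael polynomial x^8 + x^4 + x^3 + x + 1 (0x11B); addition in this field is plain XOR. The pair {57}·{83} = {c1} is the worked example given in the AES specification.

```python
# Multiplication in GF(2^8) modulo the Rijndael polynomial 0x11B,
# using the shift-and-reduce ("Russian peasant") method.
def gmul(a: int, b: int) -> int:
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a              # add (XOR) the current multiple of a
        carry = a & 0x80
        a = (a << 1) & 0xFF          # multiply a by x
        if carry:
            a ^= 0x1B                # reduce modulo x^8 + x^4 + x^3 + x + 1
        b >>= 1
    return result

print(hex(gmul(0x57, 0x83)))         # -> 0xc1
```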

Figure 2. Substitution table of the algorithm, showing the substitution value of byte xy in hexadecimal.

3.2 The ShiftRows() Transformation

In this transformation, the bytes of the last three rows of the state are cyclically shifted to the left by one, two and three positions for the second, third and fourth rows, respectively. The effect of this transformation on the state S is shown in figure 3.

Figure 3. Operation of the ShiftRows() function.

3.3 The MixColumns() Transformation

This function acts on the columns of the state. Each column, treated as a four-term polynomial over GF(2^8), is multiplied by the fixed polynomial a(x) = {03}x^3 + {01}x^2 + {01}x + {02}, modulo x^4 + 1. This operation can be represented by the matrix multiplication

[s'_0,c]   [02 03 01 01] [s_0,c]
[s'_1,c] = [01 02 03 01] [s_1,c]
[s'_2,c]   [01 01 02 03] [s_2,c]
[s'_3,c]   [03 01 01 02] [s_3,c]

for 0 <= c < Nb. Figure 4 illustrates the MixColumns() operation, which is not performed in the last round.

Figure 4. The MixColumns() function, which acts on the state column by column.

3.4 The AddRoundKey() Transformation

In this operation, the round subkey is added to the state. The addition is modulo 2, performed as a simple exclusive-OR (XOR) operation. Each subkey, w, has Nb 32-bit words and is generated through the process called Key Schedule. The addition is performed on each column of the state.

3.5 Key Generation (Key Schedule)

From the cipher key, all subkeys, or round keys, are generated. The Key Schedule consists of a key expansion stage (Key Expansion) and a key selection stage (Round Key Selection). In both stages, the algorithm treats the keys as 32-bit words, arranged in a vector W. The first positions of the vector are filled with the cipher key itself, and the remaining ones are generated through the expansion process, which can be described as follows: the word at position i is obtained by adding (exclusive-OR) the word at the previous position to the word Nk positions back, that is, w_i = w_{i-1} XOR w_{i-Nk}.
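The MixColumns() matrix above can be checked on a single column. The helper reuses the GF(2^8) multiplication modulo 0x11B; the input column db 13 53 45, mapping to 8e 4d a1 bc, is the worked example from the AES specification.

```python
# One MixColumns() column, computed with the circulant matrix
# [02 03 01 01; 01 02 03 01; 01 01 02 03; 03 01 01 02] over GF(2^8).
def gmul(a, b):
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B                # reduce modulo the AES polynomial
        b >>= 1
    return r

def mix_column(col):
    s0, s1, s2, s3 = col
    return [
        gmul(s0, 2) ^ gmul(s1, 3) ^ s2 ^ s3,   # row 0: 02 03 01 01
        s0 ^ gmul(s1, 2) ^ gmul(s2, 3) ^ s3,   # row 1: 01 02 03 01
        s0 ^ s1 ^ gmul(s2, 2) ^ gmul(s3, 3),   # row 2: 01 01 02 03
        gmul(s0, 3) ^ s1 ^ s2 ^ gmul(s3, 2),   # row 3: 03 01 01 02
    ]

print([hex(b) for b in mix_column([0xDB, 0x13, 0x53, 0x45])])
```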
Here Nk is the size, in words, of the cipher key; for a 128-bit key, for example, Nk = 4. For words whose index is a multiple of Nk, the word w_{i-1} first undergoes three transformations: a cyclic left shift by one byte, the application of the substitution table to the four bytes of the word, and an exclusive-OR with a constant that depends on the round number. Once the vector W is filled with the expanded key, the keys to be used in each round are selected as follows: for the first round, the first Nb words are chosen; for the second round, the next Nb words; and so on.

4 PROPOSED ARCHITECTURE

Starting from the description of the cipher, we analyzed the resources required by each stage of the encryption, as well as the resources available in the FPGAs on the market. FPGAs, or Field Programmable Gate Arrays, are integrated circuits that can be configured to perform a specific function. These devices consist of a matrix of programmable logic elements, linked by a network of connections. Both the function performed by the logic elements and the connections between them are configurable; in most devices this is done through anti-fuse technology or SRAM memory elements. In the case of Altera FPGAs, each logic element consists of a flip-flop connected to a four-input lookup table. Thus, for example, any logic function that depends on up to four
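For a word index that is not a multiple of Nk, the expansion step above reduces to a single XOR, w[i] = w[i-1] XOR w[i-Nk]. The sketch below shows that step; the 32-bit word values are taken from the AES-128 key-expansion example in the AES specification and should be treated as illustrative inputs rather than a verified vector.

```python
# One key-expansion step where the index is NOT a multiple of Nk:
# w[i] = w[i-1] XOR w[i-Nk]. Word values are from the published
# AES-128 key-expansion example (illustrative inputs).
Nk = 4                       # 128-bit cipher key -> 4 words
w1 = 0x28AED2A6              # word 1 of the cipher key
w4 = 0xA0FAFE17              # first expanded word (index 4)

w5 = w4 ^ w1                 # index 5 is not a multiple of Nk: plain XOR
print(hex(w5))               # -> 0x88542cb1
```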
