Internet Engineering Task Force (IETF)                     D. Waltermire
Request for Comments: 7632                                          NIST
Category: Informational                                    D. Harrington
ISSN: 2070-1721                                       Effective Software
                                                          September 2015


        Endpoint Security Posture Assessment: Enterprise Use Cases

Abstract

   This memo documents a sampling of use cases for securely aggregating
   configuration and operational data and evaluating that data to
   determine an organization's security posture.  From these operational
   use cases, we can derive common functional capabilities and
   requirements to guide development of vendor-neutral, interoperable
   standards for aggregating and evaluating data relevant to security
   posture.

Status of This Memo

   This document is not an Internet Standards Track specification; it is
   published for informational purposes.

   This document is a product of the Internet Engineering Task Force
   (IETF).  It represents the consensus of the IETF community.  It has
   received public review and has been approved for publication by the
   Internet Engineering Steering Group (IESG).  Not all documents
   approved by the IESG are a candidate for any level of Internet
   Standard; see Section 2 of RFC 5741.

   Information about the current status of this document, any errata,
   and how to provide feedback on it may be obtained at
   http://www.rfc-editor.org/info/rfc7632.

Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Endpoint Posture Assessment
     2.1.  Use Cases
       2.1.1.  Define, Publish, Query, and Retrieve Security
               Automation Data
       2.1.2.  Endpoint Identification and Assessment Planning
       2.1.3.  Endpoint Posture Attribute Value Collection
       2.1.4.  Posture Attribute Evaluation
     2.2.  Usage Scenarios
       2.2.1.  Definition and Publication of Automatable
               Configuration Checklists
       2.2.2.  Automated Checklist Verification
       2.2.3.  Detection of Posture Deviations
       2.2.4.  Endpoint Information Analysis and Reporting
       2.2.5.  Asynchronous Compliance/Vulnerability Assessment at
               Ice Station Zebra
       2.2.6.  Identification and Retrieval of Guidance
       2.2.7.  Guidance Change Detection
   3.  Security Considerations
   4.  Informative References
   Acknowledgements
   Authors' Addresses

1.  Introduction

   This document describes the core set of use cases for endpoint
   posture assessment for enterprises.  It provides a discussion of
   these use cases and associated building block capabilities.  The
   described use cases support:

   o  securely collecting and aggregating configuration and operational
      data, and

   o  evaluating that data to determine the security posture of
      individual endpoints.

   Additionally, this document describes a set of usage scenarios that
   provide examples for using the use cases and associated building
   blocks to address a variety of operational functions.

   These operational use cases and related usage scenarios cross many IT
   security domains.  The use cases enable the derivation of common:

   o  concepts that are expressed as building blocks in this document,

   o  characteristics to inform development of a requirements document,

   o  information concepts to inform development of an information model
      document, and

   o  functional capabilities to inform development of an architecture
      document.

   Together, these ideas will be used to guide development of vendor-
   neutral, interoperable standards for collecting, aggregating, and
   evaluating data relevant to security posture.

   Using this standard data, tools can analyze the state of endpoints,
   user activities and behaviour, and evaluate the security posture of
   an organization.  Common expression of information should enable
   interoperability between tools (whether customized, commercial, or
   freely available), and the ability to automate portions of security
   processes to gain efficiency, react to new threats in a timely
   manner, and free up security personnel to work on more advanced
   problems.

   The goal is to enable organizations to make informed decisions that
   support organizational objectives, to enforce policies for hardening
   systems, to prevent network misuse, to quantify business risk, and to
   collaborate with partners to identify and mitigate threats.

   It is expected that use cases for enterprises and for service
   providers will largely overlap.  When considering this overlap, there
   are additional complications for service providers, especially in
   handling information that crosses administrative domains.

   The output of endpoint posture assessment is expected to feed into
   additional processes, such as policy-based enforcement of acceptable
   state, verification and monitoring of security controls, and
   compliance to regulatory requirements.

2.  Endpoint Posture Assessment

   Endpoint posture assessment involves orchestrating and performing
   data collection and evaluating the posture of a given endpoint.
   Typically, endpoint posture information is gathered and then
   published to appropriate data repositories to make collected
   information available for further analysis supporting organizational
   security processes.

   Endpoint posture assessment typically includes:

   o  Collecting the attributes of a given endpoint;

   o  Making the attributes available for evaluation and action; and

   o  Verifying that the endpoint's posture is in compliance with
      enterprise standards and policy.

   As part of these activities, it is often necessary to identify and
   acquire any supporting security automation data that is needed to
   drive and feed data collection and evaluation processes.

   The following is a typical workflow scenario for assessing endpoint
   posture:

   1.  Some type of trigger initiates the workflow.  For example, an
       operator or an application might trigger the process with a
       request, or the endpoint might trigger the process using an
       event-driven notification.

   2.  An operator/application selects one or more target endpoints to
       be assessed.

   3.  An operator/application selects which policies are applicable to
       the targets.

   4.  For each target:

       A.  The application determines which (sets of) posture attributes
           need to be collected for evaluation.  Implementations should
           be able to support (possibly mixed) sets of standardized and
           proprietary attributes.

       B.  The application might retrieve previously collected
           information from a cache or data store, such as a data store
           populated by an asset management system.

       C.  The application might establish communication with the
           target, mutually authenticate identities and authorizations,
           and collect posture attributes from the target.

       D.  The application might establish communication with one or
           more intermediaries or agents, which may be local or
           external.  When establishing connections with an intermediary
           or agent, the application can mutually authenticate their
           identities and determine authorizations, and collect posture
           attributes about the target from the intermediaries or
           agents.

       E.  The application communicates target identity and (sets of)
           collected attributes to an evaluator, possibly an external
           process or external system.

       F.  The evaluator compares the collected posture attributes with
           expected values as expressed in policies.

       G.  The evaluator reports the evaluation result for the requested
           assessment, in a standardized or proprietary format, such as
           a report, a log entry, a database entry, or a notification.
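   The per-target steps above can be sketched in outline form.  The
   following is an illustrative sketch only: the function and parameter
   names (assess, collect, evaluate, cache) are hypothetical and are not
   defined by SACM or any related specification.

```python
# Illustrative sketch of workflow steps 4.A-4.G described above.
# All names here are hypothetical; SACM defines no concrete API.

def assess(targets, policies, collect, evaluate, cache=None):
    """Assess each target endpoint against each applicable policy.

    collect(target, attrs)         -> dict of attribute name -> value
    evaluate(target, vals, policy) -> evaluation result
    cache                          -> optional previously collected values
    """
    results = []
    for target in targets:
        for policy in policies:
            # Step 4.A: determine which posture attributes are needed.
            needed = set(policy["attributes"])
            # Step 4.B: reuse previously collected values where available.
            values = dict((cache or {}).get(target, {}))
            missing = needed - values.keys()
            # Steps 4.C/4.D: collect the rest from the target or an
            # intermediary/agent (abstracted here as one callable).
            if missing:
                values.update(collect(target, missing))
            # Steps 4.E-4.G: hand off to the evaluator, record the result.
            results.append((target, policy["name"],
                            evaluate(target, values, policy)))
    return results
```

   A caller would supply its own collection and evaluation callables,
   for example one that compares a collected OS version against an
   expected minimum value expressed in policy.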

2.1.  Use Cases

   The following subsections detail specific use cases for assessment
   planning, data collection, analysis, and related operations
   pertaining to the publication and use of supporting data.  Each use
   case is defined by a short summary containing a simple problem
   statement, followed by a discussion of related concepts, and a
   listing of associated building blocks which represent the
   capabilities needed to support the use case.  These use cases and
   building blocks identify separate units of functionality that may be
   supported by different components of an architectural model.

2.1.1.  Define, Publish, Query, and Retrieve Security Automation Data

   This use case describes the need for security automation data to be
   defined and published to one or more data stores, as well as queried
   and retrieved from these data stores for the explicit use of posture
   collection and evaluation.
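   As a rough illustration of these four operations, a data store might
   expose publish, query, and retrieve primitives along the following
   lines.  This in-memory sketch is an assumption for illustration only;
   the class and method names are hypothetical, and a real store would
   add authentication, access control, and change detection.

```python
class SecurityAutomationDataStore:
    """Minimal in-memory sketch of a security automation data store.

    Hypothetical interface; nothing here is defined by SACM.
    """

    def __init__(self):
        self._entries = {}  # entry id -> (publication metadata, data)

    def publish(self, entry_id, data, **metadata):
        # Store the data along with its publication metadata.
        self._entries[entry_id] = (metadata, data)

    def query(self, **criteria):
        # Return ids of entries whose metadata matches all criteria.
        return [eid for eid, (meta, _) in self._entries.items()
                if all(meta.get(k) == v for k, v in criteria.items())]

    def retrieve(self, entry_id):
        # Acquire one specific security automation data entry.
        return self._entries[entry_id][1]
```

   For example, a producer could publish a checklist with publisher and
   kind metadata, and a consumer could later query by publisher and
   retrieve the matching entry.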

   Security automation data is a general concept that refers to any data
   expression that may be generated and/or used as part of the process
   of collecting and evaluating endpoint posture.  Different types of
   security automation data will generally fall into one of three
   categories:

   Guidance:  Instructions and related metadata that guide the attribute
         collection and evaluation processes.  The purpose of this data
          is to allow implementations to be data-driven, thus enabling
         behavior to be customized without requiring changes to deployed
         software.

         This type of data tends to change in units of months and days.
         In cases where assessments are made more dynamic, it may be
         necessary to handle changes in the scope of hours or minutes.
         This data will typically be provided by large organizations,
          product vendors, and some third parties.  Thus, it will tend to
         be shared across large enterprises and customer communities.
         In some cases access may be controlled to specific
         authenticated users.  In other cases, the data may be provided
         broadly with little to no access control.

         This includes:

         *  Listings of attribute identifiers for which values may be
             collected and evaluated.

         *  Lists of attributes that are to be collected along with
            metadata that includes: when to collect a set of attributes
            based on a defined interval or event, the duration of
            collection, and how to go about collecting a set of
            attributes.

         *  Guidance that specifies how old collected data can be to be
            used for evaluation.

         *  Policies that define how to target and perform the
            evaluation of a set of attributes for different kinds or
            groups of endpoints and the assets they are composed of.  In
            some cases it may be desirable to maintain hierarchies of
            policies as well.

         *  References to human-oriented data that provide technical,
            organizational, and/or policy context.  This might include
            references to: best practices documents, legal guidance and
            legislation, and instructional materials related to the
            automation data in question.

   Attribute Data:  Data collected through automated and manual
         mechanisms describing organizational and posture details
         pertaining to specific endpoints and the assets that they are
         composed of (e.g., hardware, software, accounts).  The purpose
         of this type of data is to characterize an endpoint (e.g.,
         endpoint type, organizationally expected function/role) and to
         provide actual and expected state data pertaining to one or
         more endpoints.  This data is used to determine what posture
         attributes to collect from which endpoints and to feed one or
         more evaluations.

          This type of data tends to change in units of days, minutes,
          and seconds, with posture attribute values typically changing more
         frequently than endpoint characterizations.  This data tends to
         be organizationally and endpoint specific, with specific
         operational groups of endpoints tending to exhibit similar
         attribute profiles.  This data will generally not be shared
         outside an organizational boundary and will generally require
         authentication with specific access controls.

         This includes:

         *  Endpoint characterization data that describes the endpoint
            type, organizationally expected function/role, etc.

         *  Collected endpoint posture attribute values and related
            context including: time of collection, tools used for
            collection, etc.

         *  Organizationally defined expected posture attribute values
            targeted to specific evaluation guidance and endpoint
            characteristics.  This allows a common set of guidance to be
            parameterized for use with different groups of endpoints.

   Processing Artifacts:  Data that is generated by, and is specific to,
         an individual assessment process.  This data may be used as
         part of the interactions between architectural components to
         drive and coordinate collection and evaluation activities.  Its
         lifespan will be bounded by the lifespan of the assessment.  It
         may also be exchanged and stored to provide historic context
         around an assessment activity so that individual assessments
         can be grouped, evaluated, and reported in an enterprise
         context.

         This includes:

         *  The identified set of endpoints for which an assessment
            should be performed.

         *  The identified set of posture attributes that need to be
            collected from specific endpoints to perform an evaluation.

         *  The resulting data generated by an evaluation process
            including the context of what was assessed, what it was
            assessed against, what collected data was used, when it was
            collected, and when the evaluation was performed.
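   For illustration only, the three categories above might be modeled as
   distinct record types with different lifetimes and sharing scopes.
   The field names below are assumptions made for this sketch; SACM does
   not define such a schema.

```python
from dataclasses import dataclass, field

# Hypothetical record types for the three categories of security
# automation data described above; field names are illustrative only.

@dataclass
class Guidance:
    # Slow-changing, often shared broadly across organizations.
    checklist_id: str
    attributes_to_collect: list
    max_data_age_seconds: int      # how old collected data may be

@dataclass
class AttributeData:
    # Endpoint-specific, usually access controlled within an organization.
    endpoint_id: str
    values: dict                   # collected posture attribute values
    collected_at: str              # e.g., an RFC 3339 timestamp

@dataclass
class ProcessingArtifact:
    # Scoped to a single assessment; kept only for historic context.
    assessment_id: str
    target_endpoints: list = field(default_factory=list)
    results: dict = field(default_factory=dict)
```

   Separating the categories this way reflects their different temporal
   characteristics and access-control needs, which is one reason the
   text below anticipates multiple data models and repositories.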

   The information model for security automation data must support a
   variety of different data types as described above, along with the
   associated metadata that is needed to support publication, query, and
   retrieval operations.  It is expected that multiple data models will
   be used to express specific data types requiring specialized or
   extensible determine the security automation data repositories.  The different
   temporal characteristics, access patterns, and access control
   dimensions posture of each data type may also require different protocols and
   data models to be supported furthering the potential requirement for
   specialized data repositories.  See [RFC3444] for
      individual endpoints.

   Additionally, this document describes a description and
   discussion set of distinctions between an information and data model.  It
   is likely usage scenarios that additional kinds of data will be identified through
   the process of defining requirements and an architectural model.
   Implementations supporting this building block will need to be
   extensible to accommodate the addition of new types of data, both
   proprietary or (preferably)
   provide examples for using a standard format.

   The building blocks of this the use case are:

   Data Definition:  Security automation data will guide and inform
         collection and evaluation processes.  This data may be designed
         by a variety of roles - application implementers may build
         security automation data into their applications;
         administrators may define guidance based on organizational
         policies; operators may define guidance and attribute data as
         needed for evaluation at runtime, cases and so on.  Data producers
         may choose to reuse data from existing stores of security
         automation data and/or may create new data.  Data producers may
         develop data based on available standardized or proprietary
         data models, such as those used for network management and/or
         host management.

   Data Publication:  The capability to enable data producers to publish
         data to a security automation data store for further use.
         Published data may be made publicly available or access may be
         based on an authorization decision using authenticated
         credentials.  As associated building
   blocks to address a result, the visibility variety of specific operational functions.

   These operational use cases and related usage scenarios cross many IT
   security
         automation data to an operator or application may be public,
         enterprise-scoped, private, or controlled within any other
         scope.

   Data Query:  An operator or application should be able domains.  The use cases enable the derivation of common:

   o  concepts that are expressed as building blocks in this document,

   o  characteristics to query a
         security automation data store using inform development of a set requirements document,

   o  information concepts to inform development of specified
         criteria.  The result an information model
      document, and

   o  functional capabilities to inform development of the query an architecture
      document.

   Together, these ideas will be a listing matching
         the query.  The query result listing may contain publication
         metadata (e.g., create date, modified date, publisher, etc.)
         and/or the full used to guide development of vendor-
   neutral, interoperable standards for collecting, aggregating, and
   evaluating data relevant to security posture.

   Using this standard data, a summary, snippet, or tools can analyze the location to
         retrieve state of endpoints as
   well as user activities and behaviour, and evaluate the data.

   Data Retrieval:  A user, operator, or application acquires one or
         more specific security automation data entries.  The location
   posture of an organization.  Common expression of the data may be known a priori, or may be determined based
         on decisions made using information from a previous query.

   Data Change Detection:  An operator should
   enable interoperability between tools (whether customized,
   commercial, or application needs freely available), and the ability to know when automate
   portions of security automation data they interested processes to gain efficiency, react to new
   threats in has been published
         to, updated in, or deleted from a timely manner, and free up security automation data
         store which they have been authorized personnel to access.

   These building blocks are used work on
   more advanced problems.

   The goal is to enable acquisition organizations to make informed decisions that
   support organizational objectives, to enforce policies for hardening
   systems, to prevent network misuse, to quantify business risk, and to
   collaborate with partners to identify and mitigate threats.

   It is expected that use cases for enterprises and for service
   providers will largely overlap.  When considering this overlap, there
   are additional complications for service providers, especially in
   handling information that crosses administrative domains.

   The output of various
   instances endpoint posture assessment is expected to feed into
   additional processes, such as policy-based enforcement of acceptable
   state, verification and monitoring of security automation data based on specific data models
   that are used controls, and
   compliance to drive assessment planning (see section 2.1.2), regulatory requirements.

2.  Endpoint Posture Assessment

   Endpoint posture attribute value collection (see section 2.1.3), assessment involves orchestrating and posture
   evaluation (see section 2.1.4).

2.1.2.  Endpoint Identification performing
   data collection and Assessment Planning

   This use case describes evaluating the process posture of discovering endpoints,
   understanding their composition, identifying the desired state to
   assess against, and calculating what a given endpoint.
   Typically, endpoint posture attributes information is gathered and then
   published to collect appropriate data repositories to
   enable evaluation.  This process may be a set of manual, automated,
   or hybrid steps that are performed make collected
   information available for each assessment.

   The building blocks of this use case are: further analysis supporting organizational
   security processes.

   Endpoint Discovery:  To determine posture assessment typically includes:

   o  collecting the current or historic presence attributes of
         endpoints in a given endpoint;

   o  making the environment that are attributes available for evaluation and action; and

   o  verifying that the endpoint's posture
         assessment.  Endpoints are identified is in support compliance with
      enterprise standards and policy.

   As part of discovery
         using information previously obtained or by using other
         collection mechanisms these activities, it is often necessary to gather identification identify and
         characterization data.  Previously obtained
   acquire any supporting security automation data that is needed to
   drive and feed data may originate
         from sources such as network authentication exchanges.

   Endpoint Characterization:  The act of acquiring, through automated collection or manual input, and organizing attributes
         associated with an endpoint (e.g., type, organizationally
         expected function/role, hardware/software versions).

   Identify Endpoint Targets:  Determine the candidate evaluation processes.

   The following is a typical workflow scenario for assessing endpoint
         target(s) against which to perform
   posture:

   1.  Some type of trigger initiates the assessment.  Depending
         on workflow.  For example, an
       operator or an application might trigger the assessment trigger, process with a single endpoint
       request, or multiple
         endpoints may be targeted based on characterized the endpoint
         attributes.  Guidance describing might trigger the assessment process using an
       event-driven notification.

   2.  An operator/application selects one or more target endpoints to
       be performed
         may contain instructions or references used assessed.

   3.  An operator/application selects which policies are applicable to determine
       the
         applicable assessment targets.  In this case the Data Query
         and/or Data Retrieval building blocks (see section 2.1.1) may
         be used

   4.  For each target:

       A.  The application determines which (sets of) posture attributes
           need to acquire this data.

   Endpoint Component Inventory:  To determine what applicable desired
         states be collected for evaluation.  Implementations should
           be assessed, it is first necessary able to acquire the
         inventory support (possibly mixed) sets of software, hardware, standardized and accounts associated with
         the targeted endpoint(s).  If the assessment of the endpoint is
         not dependent on the these details, then this capability is not
         required for use in performing the assessment.  This process
         can be treated
           proprietary attributes.

       B.  The application might retrieve previously collected
           information from a cache or data store, such as a collection use case for specific data store
           populated by an asset management system.

       C.  The application might establish communication with the
           target, mutually authenticate identities and authorizations,
           and collect posture
         attributes.  In this case attributes from the building blocks for
         Endpoint Posture Attribute Value Collection (see section 2.1.3)
         can target.

       D.  The application might establish communication with one or
           more intermediaries or agents, which may be used.

   Posture Attribute Identification:  Once local or
           external.  When establishing connections with an intermediary
           or agent, the endpoint targets and application can mutually authenticate their associated asset inventory is known, it is then necessary
         to calculate what
           identities and determine authorizations, and collect posture
           attributes are required to be about the target from the intermediaries or
           agents.

       E.  The application communicates target identity and (sets of)
           collected attributes to perform an evaluator, which is possibly an
           external process or external system.

       F.  The evaluator compares the desired evaluation.  When available,
         existing collected posture data is queried attributes with
           expected values as expressed in policies.

       G.  The evaluator reports the evaluation result for the requested
           assessment, in a standardized or proprietary format, such as
           a report, a log entry, a database entry, or a notification.

2.1.  Use Cases

   The following subsections detail specific use cases for suitability using the Data
         Query building block (see section 2.1.1).  Such posture assessment
   planning, data is
         suitable if it is complete collection, analysis, and current enough for use in related operations
   pertaining to the
         evaluation.  Any unsuitable posture data is identified for
         collection.

         If this publication and use of supporting data.  Each use
   case is driven defined by guidance, then the Data Query and/or Data
         Retrieval a short summary containing a simple problem
   statement, followed by a discussion of related concepts, and a
   listing of associated building blocks (see section 2.1.1) may be used to
         acquire this data.

   At this point that represent the set of posture attribute values capabilities
   needed to support the use for
   evaluation are known case.  These use cases and they can building blocks
   identify separate units of functionality that may be collected if necessary (see
   section 2.1.3).

2.1.3.  Endpoint Posture Attribute Value Collection supported by
   different components of an architectural model.

2.1.1.  Define, Publish, Query, and Retrieve Security Automation Data

   This use case describes the need for security automation data to be
   defined and published to one or more data stores, as well as queried
   and retrieved from these data stores for the explicit use of posture
   collection and evaluation.

   Security automation data is a general concept that refers to any data
   expression that may be generated and/or used as part of the process
   of collecting and evaluating endpoint posture.  Different types of
   security automation data will generally fall into one of three
   categories:

   Guidance:  Instructions and related metadata that guide the attribute
         collection and evaluation processes.  The purpose of this data
         is to allow implementations to be data-driven, thus enabling
         their behavior to be customized without requiring changes to
         deployed software.

         This type of data tends to change in units of months and days.
         In cases where assessments are made more dynamic, it may be
         necessary to handle changes in the scope of hours or minutes.
         This data will typically be provided by large organizations,
         product vendors, and some third parties.  Thus, it will tend to
         be shared across large enterprises and customer communities.

         In some cases, access may be controlled to specific
         authenticated users.  In other cases, the data may be provided
         broadly with little to no access control.

         This includes:

         *  Listings of attribute identifiers for which values may be
            collected and evaluated.

         *  Lists of attributes that are to be collected along with
            metadata that includes: when to collect a set of attributes
            based on a defined interval or event, the duration of
            collection, and how to go about collecting a set of
            attributes.

         *  Guidance that specifies how old collected data can be when
            used for evaluation.

         *  Policies that define how to target and perform the
            evaluation of a set of attributes for different kinds or
            groups of endpoints and the assets they are composed of.  In
            some cases, it may be desirable to maintain hierarchies of
            policies as well.

         *  References to human-oriented data that provide technical,
            organizational, and/or policy context.  This might include
            references to: best practices documents, legal guidance and
            legislation, and instructional materials related to the
            automation data in question.

   Attribute Data:  Data collected through automated and manual
         mechanisms describing organizational and posture details
         pertaining to specific endpoints and the assets that they are
         composed of (e.g., hardware, software, accounts).  The purpose
         of this type of data is to characterize an endpoint (e.g.,
         endpoint type, organizationally expected function/role) and to
         provide actual and expected state data pertaining to one or
         more endpoints.  This data is used to determine what posture
         attributes to collect from which endpoints and to feed one or
         more evaluations.

         This type of data tends to change in units of days, minutes,
         and seconds, with posture attribute values typically changing
         more frequently than endpoint characterizations.  This data
         tends to be organizationally and endpoint specific, with
         specific operational groups of endpoints tending to exhibit
         similar attribute profiles.  Generally, this data will not be
         shared outside an organizational boundary and will require
         authentication with specific access controls.

         This includes:

         *  Endpoint characterization data that describes the endpoint
            type, organizationally expected function/role, etc.

         *  Collected endpoint posture attribute values and related
            context including: time of collection, tools used for
            collection, etc.

         *  Organizationally defined expected posture attribute values
            targeted to specific evaluation guidance and endpoint
            characteristics.  This allows a common set of guidance to be
            parameterized for use with different groups of endpoints.

   Processing Artifacts:  Data that is generated by, and is specific to,
         an individual assessment process.  This data may be used as
         part of the interactions between architectural components to
         drive and coordinate collection and evaluation activities.  Its
         lifespan will be bounded by the lifespan of the assessment.  It
         may also be exchanged and stored to provide historic context
         around an assessment activity so that individual assessments
         can be grouped, evaluated, and reported in an enterprise
         context.

         This includes:

         *  The identified set of endpoints for which an assessment
            should be performed.

         *  The identified set of posture attributes that need to be
            collected from specific endpoints to perform an evaluation.

         *  The resulting data generated by an evaluation process
            including the context of what was assessed, what it was
            assessed against, what collected data was used, when it was
            collected, and when the evaluation was performed.

   The information model for security automation data must support a
   variety of different data types as described above, along with the
   associated metadata that is needed to support publication, query, and
   retrieval operations.  It is expected that multiple data models will
   be used to express specific data types requiring specialized or
   extensible security automation data repositories.  The different
   temporal characteristics, access patterns, and access control
   dimensions of each data type may also require different protocols and
   data models to be supported furthering the potential requirement for
   specialized data repositories.  See [RFC3444] for a description and
   discussion of distinctions between an information and data model.  It
   is likely that additional kinds of data will be identified through
   the process of defining requirements and an architectural model.
   Implementations supporting this building block will need to be
   extensible to accommodate the addition of new types of data, whether
   proprietary or (preferably) using a standard format.

   The building blocks of this use case are:

   Data Definition:  Security automation data will guide and inform
         collection and evaluation processes.  This data may be designed
         by a variety of roles -- application implementers may build
         security automation data into their applications;
         administrators may define guidance based on organizational
         policies; operators may define guidance and attribute data as
         needed for evaluation at runtime; and so on.  Data producers
         may choose to reuse data from existing stores of security
         automation data and/or may create new data.  Data producers may
         develop data based on available standardized or proprietary
         data models, such as those used for network management and/or
         host management.

   Data Publication:  The capability to enable data producers to publish
         data to a security automation data store for further use.
         Published data may be made publicly available or access may be
         based on an authorization decision using authenticated
         credentials.  As a result, the visibility of specific security
         automation data to an operator or application may be public,
         enterprise-scoped, private, or controlled within any other
         scope.

   Data Query:  An operator or application should be able to query a
         security automation data store using a set of specified
         criteria.  The result of the query will be a listing matching
         the query.  The query result listing may contain publication
         metadata (e.g., create date, modified date, publisher, etc.)
         and/or the full data, a summary, snippet, or the location to
         retrieve the data.

   Data Retrieval:  A user, operator, or application acquires one or
         more specific security automation data entries.  The location
         of the data may be known a priori, or may be determined based
         on decisions made using information from a previous query.

   Data Change Detection:  An operator or application needs to know when
         security automation data they are interested in has been
         published to, updated in, or deleted from a security automation
         data store that they have been authorized to access.

   These building blocks are used to enable acquisition of various
   instances of security automation data based on specific data models
   that are used to drive assessment planning (see Section 2.1.2),
   posture attribute value collection (see Section 2.1.3), and posture
   evaluation (see Section 2.1.4).
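
   As a non-normative sketch, the publication, query, and retrieval
   building blocks above can be pictured as a toy in-memory data store.
   All class, field, and method names here are invented for
   illustration; SACM does not define this API:

```python
# Illustrative only: a toy "security automation data store" showing the
# Data Publication, Data Query, and Data Retrieval building blocks.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Entry:
    entry_id: str
    category: str          # e.g., "guidance", "attribute-data", "artifact"
    publisher: str
    created: datetime
    payload: dict = field(default_factory=dict)

class DataStore:
    def __init__(self):
        self._entries = {}

    def publish(self, entry):                 # Data Publication
        self._entries[entry.entry_id] = entry

    def query(self, **criteria):              # Data Query: returns a listing
        return [
            {"id": e.entry_id, "publisher": e.publisher,
             "created": e.created.isoformat()}
            for e in self._entries.values()
            if all(getattr(e, k) == v for k, v in criteria.items())
        ]

    def retrieve(self, entry_id):             # Data Retrieval: full entry
        return self._entries[entry_id]

store = DataStore()
store.publish(Entry("chk-1", "guidance", "vendor-a",
                    datetime(2015, 7, 1, tzinfo=timezone.utc),
                    {"checklist": "baseline-v1"}))
listing = store.query(category="guidance")    # publication metadata only
full = store.retrieve(listing[0]["id"])       # then fetch the full entry
```

   Note that, as in the Data Query description above, the query returns
   only a metadata listing; the full entry is fetched separately.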

2.1.2.  Endpoint Identification and Assessment Planning

   This use case describes the process of discovering endpoints,
   understanding their composition, identifying the desired state to
   assess against, and calculating what posture attributes to collect to
   enable evaluation.  This process may be a set of manual, automated,
   or hybrid steps that are performed for each assessment.

   The building blocks of this use case are:

   Endpoint Discovery:  To determine the current or historic presence of
         endpoints in the environment that are available for posture
         assessment.  Endpoints are identified in support of discovery
         by using information previously obtained or using other
         collection mechanisms to gather identification and
         characterization data.  Previously obtained data may originate
         from sources such as network authentication exchanges.

   Endpoint Characterization:  The act of acquiring, through automated
         collection or manual input, and organizing attributes
         associated with an endpoint (e.g., type, organizationally
         expected function/role, hardware/software versions).

   Endpoint Target Identification:  Determine the candidate endpoint
         target(s) against which to perform the assessment.  Depending
         on the assessment trigger, a single endpoint or multiple
         endpoints may be targeted based on characterized endpoint
         attributes.  Guidance describing the assessment to be performed
         may contain instructions or references used to determine the
         applicable assessment targets.  In this case, the Data Query
         and/or Data Retrieval building blocks (see Section 2.1.1) may
         be used to acquire this data.

   Endpoint Component Inventory:  To determine what applicable desired
         states should be assessed, it is first necessary to acquire the
         inventory of software, hardware, and accounts associated with
         the targeted endpoint(s).  If the assessment of the endpoint is
         not dependent on these details, then this capability is not
         required for use in performing the assessment.  This process
         can be treated as a collection use case for specific posture
         attributes.  In this case, the building blocks for Endpoint
         Posture Attribute Value Collection (see Section 2.1.3) can be
         used.

   Posture Attribute Identification:  Once the endpoint targets and
         their associated asset inventory is known, it is then necessary
         to calculate what posture attributes are required to be
         collected to perform the desired evaluation.  When available,
         existing posture data is queried for suitability using the Data
         Query building block (see Section 2.1.1).  Such posture data is
         suitable if it is complete and current enough for use in the
         evaluation.  Any unsuitable posture data is identified for
         collection.

         If this is driven by guidance, then the Data Query and/or Data
         Retrieval building blocks (see Section 2.1.1) may be used to
         acquire this data.

   At this point, the set of posture attribute values to use for the
   evaluation are known, and they can be collected if necessary (see
   Section 2.1.3).
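
   As a non-normative sketch, the Posture Attribute Identification step
   can be pictured as splitting the required attributes into values that
   are suitable for reuse and values that must be freshly collected.
   The freshness threshold and record layout below are invented for
   illustration; SACM does not specify them:

```python
# Hypothetical assessment-planning step: reuse cached posture data that
# is "complete and current enough"; mark the rest for collection.
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(hours=24)      # assumed suitability guidance

def plan_collection(required, cached, now):
    """Split required attributes into reusable vs. to-collect."""
    reuse, collect = {}, []
    for attr in required:
        record = cached.get(attr)
        if record and now - record["collected_at"] <= MAX_AGE:
            reuse[attr] = record["value"]        # suitable posture data
        else:
            collect.append(attr)                 # unsuitable or missing
    return reuse, collect

now = datetime(2015, 7, 1, 12, 0, tzinfo=timezone.utc)
cached = {
    "os-version": {"value": "6.1",
                   "collected_at": now - timedelta(hours=2)},
    "patch-level": {"value": "KB123",
                    "collected_at": now - timedelta(days=3)},
}
reuse, collect = plan_collection(
    ["os-version", "patch-level", "fw-state"], cached, now)
# "os-version" is reusable; "patch-level" is stale and "fw-state" was
# never collected, so both are identified for collection.
```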

2.1.3.  Endpoint Posture Attribute Value Collection

   This use case describes the process of collecting a set of posture
   attribute values related to one or more endpoints.  This use case can
   be initiated by a variety of triggers including:

   1.  a posture change or significant event on the endpoint.

   2.  a network event (e.g., endpoint connects to a network/VPN,
       specific netflow [RFC3954] is detected).

   3.  a scheduled or ad hoc collection task.

   The building blocks of this use case are:

   Collection Guidance Acquisition:  If guidance is required to drive
         the collection of posture attribute values, this capability is
         used to acquire this data from one or more security automation
         data stores.  Depending on the trigger, the specific guidance
         to acquire might be known.  If not, it may be necessary to
         determine the guidance to use based on the component inventory
         or other assessment criteria.  The Data Query and/or Data
         Retrieval building blocks (see Section 2.1.1) may be used to
         acquire this guidance.

   Posture Attribute Value Collection:  The accumulation of posture
         attribute values.  This may be based on collection guidance
         that is associated with the posture attributes.

   Once the posture attribute values are collected, they may be
   persisted for later use or they may be immediately used for posture
   evaluation.
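
   As a non-normative sketch, the three collection triggers listed above
   all lead into the same collection capability.  The trigger names and
   collector below are invented for illustration:

```python
# Toy trigger-driven collection: a posture change, a network event, or
# a scheduled task each results in the same collection routine.
def collect_posture(endpoint, attributes):
    # Stand-in for a real collection mechanism (e.g., a network
    # management protocol query against the endpoint).
    return {attr: f"{endpoint}:{attr}-value" for attr in attributes}

def on_trigger(trigger, endpoint, attributes):
    if trigger not in ("posture-change", "network-event", "scheduled"):
        raise ValueError(f"unknown trigger: {trigger}")
    values = collect_posture(endpoint, attributes)
    return values      # persisted, or passed straight to evaluation

values = on_trigger("network-event", "host-42", ["os-version"])
```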

2.1.4.  Posture Attribute Evaluation

   This use case represents the action of analyzing collected posture
   attribute values as part of an assessment.  The primary focus of this
   use case is to support evaluation of actual endpoint state against
   the expected state selected for the assessment.

   This use case can be initiated by a variety of triggers including:

   1.  a posture change or significant event on the endpoint.

   2.  a network event (e.g., endpoint connects to a network/VPN,
       specific netflow [RFC3954] is detected).

   3.  a scheduled or ad hoc evaluation task.

   The building blocks of this use case are:

   Collected Posture Change Detection:  An operator or application has a
         mechanism to detect the availability of new posture attribute
         values or changes to existing ones.  The timeliness of
         detection may vary from immediate to on-demand.  Having the
         ability to filter what changes are detected will allow the
         operator to focus on the changes that are relevant to their use
         and will enable evaluation to occur dynamically based on
         detected changes.

   Posture Attribute Value Query:  If previously collected posture
         attribute values are needed, the appropriate data stores are
         queried to retrieve them using the Data Query building block
         (see Section 2.1.1).  If all posture attribute values are
         provided directly for evaluation, then this capability may not
         be needed.

   Evaluation Guidance Acquisition:  If guidance is required to drive
         the evaluation of posture attribute values, this capability is
         used to acquire this data from one or more security automation
         data stores.  Depending on the trigger, the specific guidance
         to acquire might be known.  If not, it may be necessary to
         determine the guidance to use based on the component inventory
         or other assessment criteria.  The Data Query and/or Data
         Retrieval building blocks (see Section 2.1.1) may be used to
         acquire this guidance.

   Posture Attribute Evaluation:  The comparison of the selected posture
         attribute values against their expected values as expressed in
         the specified guidance.  The result of this comparison is
         output as a set of posture evaluation results.  Such results
         include metadata required to provide a level of assurance with
         respect to the posture attribute data and, therefore,
         evaluation results.  Examples of such metadata include
         provenance and/or availability data.

   While the need primary focus of this use case is around enabling the
   comparison of expected vs. actual state, the same building blocks can
   support other analysis techniques that are applied to query a collected
   posture attribute data store to
   prepare (e.g., trending, historic analysis).

   Completion of this process represents a compliance report for complete assessment cycle as
   defined in Section 2.
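
   As a non-normative sketch, the Posture Attribute Evaluation building
   block compares collected values against expected values from
   guidance, carrying metadata (here, just provenance) alongside each
   result.  The record shapes below are assumptions for illustration:

```python
# Toy posture evaluation: actual vs. expected, with result metadata.
def evaluate(collected, guidance):
    results = []
    for attr, expected in guidance.items():
        actual = collected.get(attr, {}).get("value")
        results.append({
            "attribute": attr,
            "expected": expected,
            "actual": actual,
            "compliant": actual == expected,
            "provenance": collected.get(attr, {}).get("source"),
        })
    return results

collected = {"os-version": {"value": "6.1", "source": "agent-scan"}}
guidance = {"os-version": "6.1", "firewall": "enabled"}
results = evaluate(collected, guidance)
# "os-version" evaluates as compliant; "firewall" was never collected,
# so it evaluates as non-compliant with no provenance.
```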

2.2.  Usage Scenarios

   In this section, we describe a number of usage scenarios that utilize
   aspects of endpoint posture assessment.  These are examples of common
   problems that can be solved with the building blocks defined above.

2.2.1.  Definition and Publication of Automatable Configuration
        Checklists

   A vendor manufactures a number of specialized endpoint devices.  They
   also develop and maintain an operating system for these devices that
   enables end-user organizations to configure a number of security and
   operational settings.  As part of their customer support activities,
   they publish a number of secure configuration guides that provide
   minimum security guidelines for configuring their devices.

   Each guide they produce applies to a specific model of device and
   version of the operating system and provides a number of specialized
   configurations depending on the device's intended function and what
   add-on hardware modules and software licenses are installed on the
   device.  To enable their customers to evaluate the security posture
   of their devices to ensure that all appropriate minimal security
   settings are enabled, they publish automatable configuration
   checklists using a popular data format that defines what settings to
   collect using a network management protocol and appropriate values
   for each setting.  They publish these checklists to a public security
   automation data store that customers can query to retrieve applicable
   checklist(s) for their deployed specialized endpoint devices.

   Automatable configuration checklists could also come from sources
   other than a device vendor, such as industry groups or regulatory
   authorities, or enterprises could develop their own checklists.

   This usage scenario employs the following building blocks defined in
   Section 2.1.1 above:

   Data Definition:  To allow guidance to be defined using standardized
         or proprietary data models that will drive collection and
         evaluation.

   Data Publication:  Providing a mechanism to publish created guidance
         to a security automation data store.

   Data Query:  To locate and select existing guidance that may be
         reused.

   Data Retrieval:  To retrieve specific guidance from a security
         automation data store for editing.

   While each building block can be used in a manual fashion by a human
   operator, it is also likely that these capabilities will be
   implemented together in some form of a guidance editor or generator
   application.
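   As a purely illustrative sketch (the identifiers, format, and
   settings below are assumptions, not anything defined by SACM or a
   real vendor), such a published checklist might pair each setting to
   be collected with the value considered compliant:

```python
import json

# Illustrative sketch only: SACM does not define this format.  The
# checklist pairs each setting to collect (e.g., via a network
# management protocol) with the value considered compliant.
checklist = {
    "benchmark_id": "example-vendor-router-os-2.4-baseline",  # hypothetical
    "applies_to": {"model": "EX-4200", "os_version": "2.4"},  # hypothetical
    "rules": [
        {"setting": "ssh.protocol_version", "expected": "2"},
        {"setting": "snmp.community.public_enabled", "expected": "false"},
        {"setting": "login.banner_configured", "expected": "true"},
    ],
}

def settings_to_collect(doc):
    """Return the list of setting names a collector must gather."""
    return [rule["setting"] for rule in doc["rules"]]

# A data store could publish the checklist as JSON for customers to
# query and retrieve.
published = json.dumps(checklist, indent=2)
print(settings_to_collect(checklist))
```

   A customer querying the data store would retrieve the checklist(s)
   matching its device model and OS version, then drive collection from
   the rule list.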

2.2.2.  Automated Checklist Verification

   A financial services company operates a heterogeneous IT environment.
   In support of their risk management program, they utilize vendor-
   provided automatable security configuration checklists for each
   operating system and application used within their IT environment.
   Multiple checklists are used from different vendors to ensure
   adequate coverage of all IT assets.

   To identify what checklists are needed, they use automation to gather
   an inventory of the software versions utilized by all IT assets in
   the enterprise.  This data gathering will involve querying existing
   data stores of previously collected endpoint software inventory
   posture data and actively collecting data from reachable endpoints as
   needed, utilizing network and systems management protocols.
   Previously collected data may be provided by periodic data
   collection, network connection-driven data collection, or ongoing
   event-driven monitoring of endpoint posture changes.

   Appropriate checklists are queried, located, and downloaded from the
   relevant guidance data stores.  The specific data stores queried and
   the specifics of each query may be driven by data including:

   o  collected hardware and software inventory data, and

   o  associated asset characterization data that may indicate the
      organizationally defined functions of each endpoint.

   Checklists may be sourced from guidance data stores maintained by an
   application or OS vendor, an industry group, a regulatory authority,
   or directly by the enterprise.

   The retrieved guidance is cached locally to reduce the need to
   retrieve the data multiple times.

   Driven by the setting data provided in the checklist, a combination
   of existing configuration data stores and data collection methods are
   used to gather the appropriate posture attributes from (or pertaining
   to) each endpoint.  Specific posture attribute values are gathered
   based on the defined enterprise function and software inventory of
   each endpoint.  The collection mechanisms used to collect software
   inventory posture will be used again for this purpose.  Once the data
   is gathered, the actual state is evaluated against the expected state
   criteria defined in each applicable checklist.

   A checklist can be assessed as a whole, or a specific subset of the
   checklist can be assessed resulting in partial data collection and
   evaluation.

   The results of checklist evaluation are provided to appropriate
   operators and applications to drive additional business logic.
   Specific applications for checklist evaluation results are out of
   scope for current SACM (Security Automation and Continuous
   Monitoring) efforts.  Irrespective of specific applications, the
   availability, timeliness, and liveness of results are often of
   general concern.  Network latency and available bandwidth often
   create operational constraints that require trade-offs between these
   concerns and need to be considered.

   Uses of checklists and associated evaluation results may include, but
   are not limited to:

   o  Detecting endpoint posture deviations as part of a change
      management program to identify:

      *  missing required patches,

      *  unauthorized changes to hardware and software inventory, and

      *  unauthorized changes to configuration items.

   o  Determining compliance with organizational policies governing
      endpoint posture.

   o  Informing configuration management, patch management, and
      vulnerability mitigation and remediation decisions.

   o  Searching for current and historic indicators of compromise.

   o  Detecting current and historic infection by malware and
      determining the scope of infection within an enterprise.

   o  Detecting performance, attack, and vulnerable conditions that
      warrant additional network diagnostics, monitoring, and analysis.

   o  Informing network access control decision-making for wired,
      wireless, or VPN connections.

   This usage scenario employs the following building blocks defined in
   Section 2.1.1 above:

   Endpoint Discovery:  The purpose of discovery is to determine the
         type of endpoint to be posture assessed.

   Endpoint Target Identification:  To identify what potential endpoint
         targets the checklist should apply to based on organizational
         policies.

   Endpoint Component Inventory:  Collecting the software and hardware
         inventory for the target endpoints.

   Posture Attribute Identification:  To determine what data needs to be
         collected to support evaluation, the checklist is evaluated
         against the component inventory and other endpoint metadata to
         determine the set of posture attribute values that are needed.

   Collection Guidance Acquisition:  Based on the identified posture
         attributes, the application will query appropriate security
         automation data stores to find the "applicable" collection
         guidance for each endpoint in question.

   Posture Attribute Value Collection:  For each endpoint, the values
         for the required posture attributes are collected.

   Posture Attribute Value Query:  If previously collected posture
         attribute values are used, they are queried from the
         appropriate data stores for the target endpoint(s).

   Evaluation Guidance Acquisition:  Any guidance that is needed to
         support evaluation is queried and retrieved.

   Posture Attribute Evaluation:  The resulting posture attribute values
         from previous collection processes are evaluated using the
         evaluation guidance to provide a set of posture results.
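   To make the collect-then-evaluate flow above concrete, the following
   sketch compares collected posture attribute values against the
   expected state from a checklist; the attribute names and the simple
   equality comparison are illustrative assumptions, not SACM-defined
   behavior:

```python
# Illustrative sketch: evaluate collected posture attribute values
# against the expected state defined by a checklist.  Attribute names
# and the equality comparison are assumptions for illustration.
expected = {
    "os.patch_level": "2023-09",
    "firewall.enabled": "true",
    "av.signatures_current": "true",
}

collected = {
    "os.patch_level": "2023-07",   # stale patch level
    "firewall.enabled": "true",
    "av.signatures_current": "true",
}

def evaluate(expected, collected):
    """Return per-attribute pass/fail posture results."""
    results = {}
    for attribute, want in expected.items():
        have = collected.get(attribute)
        results[attribute] = "pass" if have == want else "fail"
    return results

# A real evaluator would hand these results to operators and
# downstream business logic, as described above.
print(evaluate(expected, collected))
```

   Evaluating only a subset of `expected` would correspond to the
   partial assessment of a checklist mentioned above.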

2.2.3.  Detection of Posture Deviations

   Example Corporation has established secure configuration baselines
   for each different type of endpoint within their enterprise
   including: network infrastructure, mobile, client, and server
   computing platforms.  These baselines define an approved list of
   hardware, software (i.e., operating system, applications, and
   patches), and associated required configurations.  When an endpoint
   connects to the network, the appropriate baseline configuration is
   communicated to the endpoint based on its location in the network,
   the expected function of the device, and other asset management data.
   It is checked for compliance with the baseline, and any deviations
   are indicated to the device's operators.  Once the baseline has been
   established, the endpoint is monitored for any change events
   pertaining to the baseline on an ongoing basis.  When a change occurs
   to posture defined in the baseline, updated posture information is
   exchanged, allowing operators to be notified and/or automated action
   to be taken.

   Like the Automated Checklist Verification usage scenario (see
   Section 2.2.2), this usage scenario supports assessment based on
   automatable checklists.  It differs from that scenario by monitoring
   for specific endpoint posture changes on an ongoing basis.  When the
   endpoint detects a posture change, an alert is generated identifying
   the specific changes in posture, thus allowing assessment of the
   delta to be performed instead of a full assessment as in the previous
   case.  This usage scenario employs the same building blocks as
   Automated Checklist Verification (see Section 2.2.2).  It differs
   slightly in how it uses the following building blocks:

   Endpoint Component Inventory:  Additionally, changes to the hardware
         and software inventory are monitored, with changes causing
         alerts to be issued.

   Posture Attribute Value Collection:  After the initial assessment,
         posture attributes are monitored for changes.  If any of the
         selected posture attribute values change, an alert is issued.

   Posture Attribute Value Query:  The previous state of posture
         attributes is tracked, allowing changes to be detected.

   Posture Attribute Evaluation:  After the initial assessment, a
         partial evaluation is performed based on changes to specific
         posture attributes.

   This usage scenario highlights the need to query a data store to
   prepare a compliance report for a specific endpoint and also the need
   for a change in endpoint state to trigger Collection and Evaluation.
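   The delta-assessment behavior described above can be sketched as
   follows; the attribute names and the flat key/value model of endpoint
   posture are assumptions for illustration:

```python
# Illustrative sketch: detect posture attribute changes against a
# tracked previous state so that only the delta is re-evaluated.
# Attribute names are hypothetical.
previous = {"os.version": "10.1", "ssh.enabled": "false"}
current = {"os.version": "10.2", "ssh.enabled": "false"}

def posture_delta(previous, current):
    """Return the attributes whose values changed since the last
    assessment (including attributes added or removed)."""
    changed = {}
    for attribute in set(previous) | set(current):
        if previous.get(attribute) != current.get(attribute):
            changed[attribute] = current.get(attribute)
    return changed

delta = posture_delta(previous, current)
if delta:
    # In the scenario above, this would raise an alert and trigger a
    # partial evaluation rather than a full assessment.
    print("posture change detected:", delta)
```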

2.2.4.  Endpoint Information Analysis and Reporting

   Freed from the drudgery of manual endpoint compliance monitoring, one
   of the administrators at Example Corporation notices (not using SACM
   standards) that five endpoints have been uploading lots of data to a
   suspicious server on the Internet.  The administrator queries data
   stores for specific endpoint posture to see what software is
   installed on those endpoints and finds that they all have a
   particular program installed.  She then queries the appropriate data
   stores to see which other endpoints have that program installed.  All
   these endpoints are monitored carefully (not using SACM standards),
   which allows the administrator to detect that the other endpoints are
   also infected.

   This is just one example of the useful analysis that a skilled
   analyst can do using data stores of endpoint posture.

   This usage scenario employs the following building blocks defined in
   Section 2.1.1 above:

   Posture Attribute Value Query:  Previously collected posture
         attribute values for the target endpoint(s) are queried from
         the appropriate data stores using a standardized method.

   This usage scenario highlights the need to query a repository for
   attributes to see which attributes certain endpoints have in common.
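   The administrator's queries might look conceptually like the
   following sketch; the data-store layout and program names are
   hypothetical, since SACM only requires that such attribute values be
   queryable:

```python
# Illustrative sketch: query previously collected software inventory
# posture to find all endpoints sharing a particular program.  The
# in-memory layout stands in for a security automation data store.
inventory = {
    "endpoint-01": {"programs": ["browser", "suspicious-tool"]},
    "endpoint-02": {"programs": ["editor"]},
    "endpoint-03": {"programs": ["suspicious-tool", "editor"]},
}

def endpoints_with_program(inventory, program):
    """Return endpoints whose collected posture lists the program."""
    return sorted(
        endpoint for endpoint, posture in inventory.items()
        if program in posture["programs"]
    )

print(endpoints_with_program(inventory, "suspicious-tool"))
```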

2.2.5.  Asynchronous Compliance/Vulnerability Assessment at Ice Station
        Zebra

   A university team receives a grant to do research at a government
   facility in the Arctic.  The only network communications will be via
   an intermittent, low-speed, high-latency, high-cost satellite link.
   During their extended expedition, they will need to show continued
   compliance with the security policies of the university, the
   government, and the provider of the satellite network, as well as
   keep current on vulnerability testing.  Interactive assessments are
   therefore not reliable, and since the researchers have very limited
   funding, they need to minimize how much money they spend on network
   data.

   Prior to departure, they register all equipment with an asset
   management system owned by the university, which will also initiate
   and track assessments.

   On a periodic basis -- either after a maximum time delta or when the
   security automation data store has received a threshold level of new
   vulnerability definitions -- the university uses the information in
   the asset management system to put together a collection request for
   all of the deployed assets that encompasses the minimal set of
   artifacts necessary to evaluate all three security policies as well
   as vulnerability testing.

   In the case of new critical vulnerabilities, this collection request
   consists only of the artifacts necessary for those vulnerabilities,
   and collection is only initiated for those assets that could
   potentially have a new vulnerability.

   (Optional) Asset artifacts are cached in a local configuration
   management database (CMDB).  When new vulnerabilities are reported to
   the security automation data store, a request to the live asset is
   only done if the artifacts in the CMDB are incomplete and/or not
   current enough.

   The collection request is queued for the next window of connectivity.
   The deployed assets eventually receive the request, fulfill it, and
   queue the results for the next return opportunity.

   The collected artifacts eventually make it back to the university
   where the level of compliance and vulnerability exposed is calculated
   and asset characteristics are compared to what is in the asset
   management system for accuracy and completeness.

   Like the Automated Checklist Verification usage scenario (see
   Section 2.2.2), this usage scenario supports assessment based on
   checklists.  It differs from that scenario in how guidance, collected
   posture attribute values, and evaluation results are exchanged due to
   bandwidth limitations and availability.  This usage scenario employs
   the same building blocks as Automated Checklist Verification (see
   Section 2.2.2).  It differs slightly in how it uses the following
   building blocks:

   Endpoint Component Inventory:  It is likely that the component
         inventory will not change.  If it does, this information will
         need to be batched and transmitted during the next
         communication window.

   Collection Guidance Acquisition:  Due to intermittent communication
         windows and bandwidth constraints, changes to collection
         guidance will need to be batched and transmitted during the
         next communication window.  Guidance will need to be cached
         locally to avoid the need for remote communications.

   Posture Attribute Value Collection:  The specific posture attribute
         values to be collected are identified remotely and batched for
         collection during the next communication window.  If a delay is
         introduced for collection to complete, results will need to be
         batched and transmitted.

   Posture Attribute Value Query:  Previously collected posture
         attribute values will be stored in a remote data store for use
         at the university.

   Evaluation Guidance Acquisition:  Due to intermittent communication
         windows and bandwidth constraints, changes to evaluation
         guidance will need to be batched and transmitted during the
         next communication window.  Guidance will need to be cached
         locally to avoid the need for remote communications.

   Posture Attribute Evaluation:  Due to the caching of posture
         attribute values and evaluation guidance, evaluation may be
         performed at both the university campus as well as the
         satellite site.

   This usage scenario highlights the need to support low-bandwidth,
   intermittent, or high-latency links.
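   The batch-and-queue pattern above can be sketched as follows; the
   queue model and the shape of a collection request are assumptions for
   illustration:

```python
# Illustrative sketch: collection requests and results are queued and
# exchanged only when the intermittent satellite link is up, as in the
# scenario above.  The request shape and link model are assumptions.
from collections import deque

outbound = deque()   # collection requests awaiting a link window
inbound = deque()    # collected artifacts awaiting return

def queue_collection_request(assets, artifacts):
    """Batch one minimal collection request for the target assets."""
    outbound.append({"assets": assets, "artifacts": artifacts})

def link_window(process_request):
    """Drain queued traffic during a window of connectivity."""
    while outbound:
        request = outbound.popleft()
        inbound.append(process_request(request))
    return list(inbound)

queue_collection_request(["sensor-01", "sensor-02"], ["patch-level"])
results = link_window(
    lambda req: {"assets": req["assets"], "status": "collected"}
)
print(results)
```

   Between link windows, nothing is transmitted; requests simply
   accumulate, which is what keeps the per-byte satellite costs down.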

2.2.6.  Identification and "Direct Human Retrieval of Ancillary Materials"

   Updated acknowledgements Guidance

   In preparation for performing an assessment, an operator or
   application will need to recognize those identify one or more security automation
   data stores that helped with editing
   the use case text.

6.6.  -03- to -04-

   Added four new use cases regarding content repository.

6.7.  -02- to -03-

   Expanded the workflow description based on ML input.

   Changed contain the ambiguous "assess" guidance entries necessary to better separate perform
   data collection
   from evaluation.

   Added use case for Search for Signs of Infection.

   Added use case for Remediation and Mitigation.

   Added use case for Endpoint Information Analysis and Reporting.

   Added use case for Asynchronous Compliance/Vulnerability Assessment
   at Ice Station Zebra.

   Added use case for Traditional endpoint assessment with stored
   results.

   Added use case for NAC/NAP connection with no stored results using an
   endpoint evaluator.

   Added use case for NAC/NAP connection with no stored results using evaluation tasks.  The location of a given
   guidance entry will either be known a
   third-party evaluator.

   Added use case for Compromised Endpoint Identification.

   Added use case for Suspicious Endpoint Behavior.

   Added use case for Vulnerable Endpoint Identification.

   Updated Acknowledgements

6.8.  -01- priori or known security
   automation data stores will need to -02-

   Changed title

   removed section 4, expecting be queried to retrieve applicable
   guidance.

   To query guidance it will be moved into the requirements
   document.

   removed the list of proposed capabilities from section 3.1

   Added empty sections for Search for Signs of Infection, Remediation
   and Mitigation, and Endpoint Information Analysis and Reporting.

   Removed Requirements Language section and rfc2119 reference.

   Removed unused references (which ended up being all references).

6.9.  -00- to -01-

   o  Work on this revision has been focused on document content
      relating primarily necessary to use define a set of asset management data search
   criteria.  This criteria will often utilize a logical combination of
   publication metadata (e.g., publishing identity, create time,
   modification time) and functions.

   o  Made significant updates criteria elements specific to section 3 including:

      *  Reworked introductory text.

      *  Replaced the single example with multiple use cases that focus
         on guidance
   data.  Once the criteria are defined, one or more discrete uses of asset management security automation
   data stores will need to support
         hardware and software inventory, and configuration management
         use cases.

      *  For one be queried, thus generating a result set.
   Depending on how the results are used, it may be desirable to return
   the matching guidance directly, a snippet of the use cases, added mapping guidance matching
   the query, or a resolvable location to functional
         capabilities used.  If popular, this retrieve the data at a later
   time.  The guidance matching the query will be added restricted based on
   the authorized level of access allowed to the other
         use cases as well.

      *  Additional use cases will be added in requester.

   If the next revision
         capturing additional discussion from location of guidance is identified in the list.

   o  Made significant updates to section 4 including:

      *  Renamed query result set,
   the section heading from "Use Cases" guidance will be retrieved when needed using one or more data
   retrieval requests.  A variation on this approach would be to "Functional
         Capabilities" since use cases are covered in section 3.  This
         section now extrapolates specific functions
   maintain a local cache of previously retrieved data.  In this case,
   only guidance that are needed to
         support the use cases.

      *  Started work is determined to flatten the section, moving select subsections
         up be stale by some measure will be
   retrieved from under asset management.

      *  Removed the subsections for: Asset Discovery, Endpoint
         Components and Asset Composition, Asset Resources, remote data store.

   Alternately, guidance can be discovered by iterating over data
   published with a given context within a security automation data
   store.  Specific guidance can be selected and Asset
         Life Cycle.

      *  Renamed retrieved as needed.

   This usage scenario employs the subsection "Asset Representation Reconciliation" following building blocks defined in
   Section 2.1.1 above:

   Data Query:  Enables an operator or application to
         "Deconfliction query one or more
         security automation data stores for guidance using a set of Asset Identities".

      *  Expanded
         specified criteria.

   Data Retrieval:  If data locations are returned in the subsections for: Asset Identification, Asset
         Characterization, query result
         set, then specific guidance entries can be retrieved and Deconfliction of Asset Identities.

      *  Added a new subsection for Asset Targeting.

      *  Moved remaining sections to "Other Unedited Content" for future
         updating.

6.10.  draft-waltermire-sacm-use-cases-05 to draft-ietf-sacm-use-
       cases-00

   o  Transitioned from individual I/D
         possibly cached locally.

2.2.7.  Guidance Change Detection

   An operator or application may need to identify new, updated, or
   deleted guidance in a security automation data store that they have
   been authorized to access.  This may be achieved by querying or
   iterating over guidance in a security automation data store, or
   through a notification mechanism that generates alerts when changes
   are made to a security automation data store.

   Once guidance changes have been determined, data collection and
   evaluation activities may be triggered.

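   One way to realize change detection, sketched below under stated
   assumptions, is to poll the data store and compare its current
   entries against a previously observed snapshot.  The identifier and
   version fields are illustrative; a notification mechanism would
   deliver equivalent information asynchronously.

```python
# Hypothetical polling-based guidance change detection: compare the
# data store's current entries against a prior snapshot and report
# new, updated, and deleted guidance.

def detect_changes(previous, current):
    """previous/current map a guidance id to a version string."""
    new = [gid for gid in current if gid not in previous]
    deleted = [gid for gid in previous if gid not in current]
    updated = [gid for gid in current
               if gid in previous and current[gid] != previous[gid]]
    return {"new": new, "updated": updated, "deleted": deleted}

changes = detect_changes(
    {"g-001": "v1", "g-002": "v1"},
    {"g-001": "v2", "g-003": "v1"})
# Detected changes could then trigger data collection and evaluation.
```
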
   This usage scenario employs the following building blocks defined in
   Section 2.1.1 above:

   Data Change Detection:  Allows an operator or application to identify
      guidance changes in a security automation data store that they
      have been authorized to access.

   Data Retrieval:  If data locations are provided by the change
      detection mechanism, then specific guidance entries can be
      retrieved and possibly cached locally.

3.  Security Considerations

   This memo documents, for informational purposes, use cases for
   security automation.  Specific security and privacy considerations
   will be provided in related documents (e.g., requirements,
   architecture, information model, data model, protocol) as appropriate
   to the function described in each related document.

   One consideration for security automation is that a malicious actor
   could use the security automation infrastructure and collected data
   to gain access to an item of interest.  This may include personal
   data, private keys, software and configuration state that can be used
   to inform an attack against the network and endpoints, and other
   sensitive information.  It is important that security and privacy
   considerations in the related documents indicate methods to both
   identify and prevent such activity.

   For consideration are means for protecting the communications as
   well as the systems that store the information.  For communications
   between the varying SACM components, there should be considerations
   for protecting the confidentiality, data integrity, and peer entity
   authentication.  For exchanged information, there should be a means
   to authenticate the origin of the information.  This is most
   important where tracking the provenance of data is needed.  Also, for
   any systems that store information that could be used for
   unauthorized or malicious purposes, methods to identify and protect
   against unauthorized usage, inappropriate usage, and denial of
   service need to be considered.
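
   As one hypothetical illustration of authenticating the origin of
   exchanged information, a publisher could attach a message
   authentication code under a key shared with the consumer, who
   verifies it before trusting the data.  The key and message below are
   assumptions; this document does not specify any mechanism, and a
   deployment needing third-party-verifiable provenance would more
   likely use digital signatures.

```python
# Sketch (not a specified mechanism): HMAC-based origin authentication
# of exchanged guidance under a pre-shared key.
import hmac
import hashlib

SHARED_KEY = b"example-shared-key"  # illustrative only

def tag(message: bytes) -> str:
    """Compute an HMAC-SHA256 tag over the message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, received_tag: str) -> bool:
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(tag(message), received_tag)

msg = b"guidance: disable telnet"
t = tag(msg)
```

   A consumer rejecting any message whose tag fails verification gains
   peer entity authentication and data integrity for the exchange, but
   a shared key cannot distinguish the two parties holding it.
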

4.  Informative References

   [RFC3444]  Pras, A. and J. Schoenwaelder, "On the Difference between
              Information Models and Data Models", RFC 3444,
              DOI 10.17487/RFC3444, January 2003,
              <http://www.rfc-editor.org/info/rfc3444>.

   [RFC3954]  Claise, B., Ed., "Cisco Systems NetFlow Services Export
              Version 9", RFC 3954, DOI 10.17487/RFC3954, October 2004,
              <http://www.rfc-editor.org/info/rfc3954>.

Acknowledgements

   Adam Montville edited early versions of this document.

   Kathleen Moriarty and Stephen Hanna contributed text describing the
   scope of the document.

   Gunnar Engelbach, Steve Hanna, Chris Inacio, Kent Landfield, Lisa
   Lorenzin, Adam Montville, Kathleen Moriarty, Nancy Cam-Winget, and
   Aron Woland provided text about the use cases for various revisions
   of this document.

Authors' Addresses

   David Waltermire
   National Institute of Standards and Technology
   100 Bureau Drive
   Gaithersburg, Maryland  20877
   United States

   Email: david.waltermire@nist.gov

   David Harrington
   Effective Software
   50 Harding Rd
   Portsmouth, New Hampshire  03801
   United States

   Email: ietfdbh@comcast.net