Dynamic Security Policy Learning

Recent research [1,2] suggests that traditional top-down security policy models are too rigid to cope with dynamic operational environments. Security policies need greater flexibility to protect information while still satisfying operational needs. Previous work has shown that a security policy can be learnt from examples using machine learning techniques: given a set of criteria of concern, these techniques can learn the policy that best fits the criteria. The criteria can be expressed as high-level objectives, or characterized by a set of previously seen decision examples. We argue here that even if an optimal policy could be learnt automatically, it would eventually become sub-optimal as the operational environment changes; in other words, the policy must be updated continually to maintain its optimality. In this paper, we review the requirements for dynamic learning and propose a dynamic policy learning framework.
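As a concrete illustration of the idea (a minimal sketch, not the technique used in the report), the snippet below learns an access-control policy from past decision examples with an off-the-shelf decision-tree classifier, then re-learns it after the operational environment shifts. The feature encoding (clearance, sensitivity, urgency) and the training data are hypothetical assumptions made for this example.

```python
# Hypothetical sketch: learn a permit/deny policy from decision examples,
# then re-learn it when the operational environment changes.
# Feature encoding and data are illustrative assumptions only.
from sklearn.tree import DecisionTreeClassifier

# Each example: (subject clearance, object sensitivity, mission urgency)
# with label 1 = permit, 0 = deny.
normal_ops = [
    ((3, 2, 0), 1), ((1, 3, 0), 0), ((2, 2, 0), 1),
    ((0, 1, 0), 0), ((3, 3, 0), 1), ((1, 2, 0), 0),
]

crisis_ops = [
    # Under high urgency, some previously denied requests become permitted.
    ((1, 3, 2), 1), ((1, 2, 2), 1), ((0, 3, 2), 0),
    ((2, 3, 2), 1), ((0, 1, 2), 1),
]

def learn_policy(examples):
    """Fit a policy that best matches the observed decisions."""
    X = [list(features) for features, _ in examples]
    y = [label for _, label in examples]
    return DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

policy = learn_policy(normal_ops)
print(policy.predict([[1, 3, 2]]))  # trained on normal ops only: deny

# The environment changes, so the once-optimal policy is now sub-optimal;
# updating it with the newly observed decisions restores the fit.
policy = learn_policy(normal_ops + crisis_ops)
print(policy.predict([[1, 3, 2]]))  # after re-learning: permit
```

The point of the second `learn_policy` call is the paper's thesis in miniature: a policy learnt once and frozen drifts out of step with the decisions the environment now requires, so learning has to be continual rather than one-shot.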

By: Yow Tzu Lim; Pau Chen Cheng; Pankaj Rohatgi; John A Clark

Published in: IBM Research Report RC24865, 2009

LIMITED DISTRIBUTION NOTICE:

This report has been submitted for publication outside of IBM and will probably be copyrighted if accepted for publication. It has been issued as a Research Report for early dissemination of its contents. In view of the transfer of copyright to the outside publisher, its distribution outside of IBM prior to publication should be limited to peer communications and specific requests. After outside publication, requests should be filled only by reprints or legally obtained copies of the article (e.g., payment of royalties).
