Merge remote-tracking branch 'origin/topic/script-reference'

* origin/topic/script-reference: (50 commits)
  A few updates for the FAQ.
  Fixing some doc warnings.
  Forgot to add protocol identifier support for TLS 1.2
  Finished SSL & syslog autodocs.
  Adding the draft SSL extension type next_protocol_negotiation.
  Fix some documentation errors.
  Tweaks.
  A set of script-reference polishing.
  fixed a couple typos in comments
  Add summary documentation to bif files.
  Add ssl and syslog script documentation
  Add Conn and DNS protocol script documentation. (fixes #731)
  Small updates to the default local.bro.
  Documentation updates for HTTP & IRC scripts.
  SSH&FTP Documentation updates.
  Fixing a warning from the documentation generation.
  This completes framework documentation package 4.
  Minor notice documentation tweaks.
  Fix some malformed Broxygen xref roles.
  Minor doc tweaks to init-bare.bro.
  ...

Conflicts:
	aux/broccoli
	aux/broctl
	src/bro.bif
	src/strings.bif

Includes:

    - Updated baselines for autodoc tests.
    - Now excluding stats.bro from external tests; it's not stable.
Commit 3d2dc5f5fc by Robin Sommer, 2012-01-10 14:00:44 -08:00
116 changed files with 15124 additions and 3925 deletions


@@ -1,4 +1,4 @@
-Copyright (c) 1995-2011, The Regents of the University of California
+Copyright (c) 1995-2012, The Regents of the University of California
through the Lawrence Berkeley National Laboratory and the
International Computer Science Institute. All rights reserved.


@@ -28,6 +28,7 @@ installation time:
Bro also needs the following tools, but on most systems they will
already come preinstalled:

+* Bash (For Bro Control).
* BIND8 (headers and libraries)
* Bison (GNU Parser Generator)
* Flex (Fast Lexical Analyzer)

@@ -74,10 +75,8 @@ Running Bro
===========

Bro is a complex program and it takes a bit of time to get familiar
-with it. A good place for newcomers to start is the Quickstart Guide at
-http://www.bro-ids.org/documentation/quickstart.bro.html
+with it. A good place for newcomers to start is the Quickstart Guide
+at http://www.bro-ids.org/documentation/quickstart.bro.html.

For developers that wish to run Bro directly from the ``build/``
directory (i.e., without performing ``make install``), they will have

doc/.gitignore vendored

@@ -1 +1,2 @@
html
+*.pyc

doc/_static/960.css vendored Normal file

File diff suppressed because one or more lines are too long

doc/_static/basic.css vendored Normal file

@@ -0,0 +1,513 @@
/*
* basic.css
* ~~~~~~~~~
*
* Sphinx stylesheet -- basic theme.
*
* :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
/* -- main layout ----------------------------------------------------------- */
div.clearer {
clear: both;
}
/* -- relbar ---------------------------------------------------------------- */
div.related {
width: 100%;
font-size: 90%;
}
div.related h3 {
display: none;
}
div.related ul {
margin: 0;
padding: 0 0 0 10px;
list-style: none;
}
div.related li {
display: inline;
}
div.related li.right {
float: right;
margin-right: 5px;
}
/* -- sidebar --------------------------------------------------------------- */
div.sphinxsidebarwrapper {
padding: 10px 5px 0 10px;
}
div.sphinxsidebar {
float: left;
width: 230px;
margin-left: -100%;
font-size: 90%;
}
div.sphinxsidebar ul {
list-style: none;
}
div.sphinxsidebar ul ul,
div.sphinxsidebar ul.want-points {
margin-left: 20px;
list-style: square;
}
div.sphinxsidebar ul ul {
margin-top: 0;
margin-bottom: 0;
}
div.sphinxsidebar form {
margin-top: 10px;
}
div.sphinxsidebar input {
border: 1px solid #98dbcc;
font-family: sans-serif;
font-size: 1em;
}
div.sphinxsidebar input[type="text"] {
width: 170px;
}
div.sphinxsidebar input[type="submit"] {
width: 30px;
}
img {
border: 0;
}
/* -- search page ----------------------------------------------------------- */
ul.search {
margin: 10px 0 0 20px;
padding: 0;
}
ul.search li {
padding: 5px 0 5px 20px;
background-image: url(file.png);
background-repeat: no-repeat;
background-position: 0 7px;
}
ul.search li a {
font-weight: bold;
}
ul.search li div.context {
color: #888;
margin: 2px 0 0 30px;
text-align: left;
}
ul.keywordmatches li.goodmatch a {
font-weight: bold;
}
/* -- index page ------------------------------------------------------------ */
table.contentstable {
width: 90%;
}
table.contentstable p.biglink {
line-height: 150%;
}
a.biglink {
font-size: 1.3em;
}
span.linkdescr {
font-style: italic;
padding-top: 5px;
font-size: 90%;
}
/* -- general index --------------------------------------------------------- */
table.indextable {
width: 100%;
}
table.indextable td {
text-align: left;
vertical-align: top;
}
table.indextable dl, table.indextable dd {
margin-top: 0;
margin-bottom: 0;
}
table.indextable tr.pcap {
height: 10px;
}
table.indextable tr.cap {
margin-top: 10px;
background-color: #f2f2f2;
}
img.toggler {
margin-right: 3px;
margin-top: 3px;
cursor: pointer;
}
div.modindex-jumpbox {
border-top: 1px solid #ddd;
border-bottom: 1px solid #ddd;
margin: 1em 0 1em 0;
padding: 0.4em;
}
div.genindex-jumpbox {
border-top: 1px solid #ddd;
border-bottom: 1px solid #ddd;
margin: 1em 0 1em 0;
padding: 0.4em;
}
/* -- general body styles --------------------------------------------------- */
a.headerlink {
visibility: hidden;
}
div.body p.caption {
text-align: inherit;
}
div.body td {
text-align: left;
}
.field-list ul {
padding-left: 1em;
}
.first {
margin-top: 0 !important;
}
p.rubric {
margin-top: 30px;
font-weight: bold;
}
img.align-left, .figure.align-left, object.align-left {
clear: left;
float: left;
margin-right: 1em;
}
img.align-right, .figure.align-right, object.align-right {
clear: right;
float: right;
margin-left: 1em;
}
img.align-center, .figure.align-center, object.align-center {
display: block;
margin-left: auto;
margin-right: auto;
}
.align-left {
text-align: left;
}
.align-center {
text-align: center;
}
.align-right {
text-align: right;
}
/* -- sidebars -------------------------------------------------------------- */
div.sidebar {
margin: 0 0 0.5em 1em;
border: 1px solid #ddb;
padding: 7px 7px 0 7px;
background-color: #ffe;
width: 40%;
float: right;
}
p.sidebar-title {
font-weight: bold;
}
/* -- topics ---------------------------------------------------------------- */
div.topic {
border: 1px solid #ccc;
padding: 7px 7px 0 7px;
margin: 10px 0 10px 0;
}
p.topic-title {
font-size: 1.1em;
font-weight: bold;
margin-top: 10px;
}
/* -- admonitions ----------------------------------------------------------- */
div.admonition {
margin-top: 10px;
margin-bottom: 10px;
padding: 7px;
}
div.admonition dt {
font-weight: bold;
}
div.admonition dl {
margin-bottom: 0;
}
p.admonition-title {
margin: 0px 10px 5px 0px;
font-weight: bold;
}
div.body p.centered {
text-align: center;
margin-top: 25px;
}
/* -- tables ---------------------------------------------------------------- */
table.field-list td, table.field-list th {
border: 0 !important;
}
table.footnote td, table.footnote th {
border: 0 !important;
}
th {
text-align: left;
padding-right: 5px;
}
table.citation {
border-left: solid 1px gray;
margin-left: 1px;
}
table.citation td {
border-bottom: none;
}
/* -- other body styles ----------------------------------------------------- */
ol.arabic {
list-style: decimal;
}
ol.loweralpha {
list-style: lower-alpha;
}
ol.upperalpha {
list-style: upper-alpha;
}
ol.lowerroman {
list-style: lower-roman;
}
ol.upperroman {
list-style: upper-roman;
}
dd p {
margin-top: 0px;
}
dd ul, dd table {
margin-bottom: 10px;
}
dd {
margin-top: 3px;
margin-bottom: 10px;
margin-left: 30px;
}
dt:target, .highlighted {
background-color: #fbe54e;
}
dl.glossary dt {
font-weight: bold;
font-size: 1.1em;
}
.field-list ul {
margin: 0;
padding-left: 1em;
}
.field-list p {
margin: 0;
}
.refcount {
color: #060;
}
.optional {
font-size: 1.3em;
}
.versionmodified {
font-style: italic;
}
.system-message {
background-color: #fda;
padding: 5px;
border: 3px solid red;
}
.footnote:target {
background-color: #ffa;
}
.line-block {
display: block;
margin-top: 1em;
margin-bottom: 1em;
}
.line-block .line-block {
margin-top: 0;
margin-bottom: 0;
margin-left: 1.5em;
}
.guilabel, .menuselection {
font-family: sans-serif;
}
.accelerator {
text-decoration: underline;
}
.classifier {
font-style: oblique;
}
abbr, acronym {
border-bottom: dotted 1px;
cursor: help;
}
/* -- code displays --------------------------------------------------------- */
pre {
overflow: auto;
overflow-y: hidden; /* fixes display issues on Chrome browsers */
}
td.linenos pre {
padding: 5px 0px;
border: 0;
background-color: transparent;
color: #aaa;
}
table.highlighttable {
margin-left: 0.5em;
}
table.highlighttable td {
padding: 0 0.5em 0 0.5em;
}
tt.descname {
background-color: transparent;
font-weight: bold;
# font-size: 1.2em;
}
tt.descclassname {
background-color: transparent;
}
tt.xref, a tt {
background-color: transparent;
# font-weight: bold;
}
h1 tt, h2 tt, h3 tt, h4 tt, h5 tt, h6 tt {
background-color: transparent;
}
.viewcode-link {
float: right;
}
.viewcode-back {
float: right;
font-family: sans-serif;
}
div.viewcode-block:target {
margin: -1px -10px;
padding: 0 10px;
}
/* -- math display ---------------------------------------------------------- */
img.math {
vertical-align: middle;
}
div.body div.math p {
text-align: center;
}
span.eqno {
float: right;
}
/* -- printout stylesheet --------------------------------------------------- */
@media print {
div.document,
div.documentwrapper,
div.bodywrapper {
margin: 0 !important;
width: 100%;
}
div.sphinxsidebar,
div.related,
div.footer,
#top-link {
display: none;
}
}


@@ -1,3 +1,17 @@
a.toc-backref {
color: #333;
}
h1, h2, h3, h4, h5, h6,
h1 a, h2 a, h3 a, h4 a, h5 a, h6 a {
padding:0 0 0px 0;
}
ul {
padding-bottom: 0px;
}
h1 {
font-weight: bold;
font-size: 32px;
@@ -14,3 +28,133 @@ th.field-name
{
white-space:nowrap;
}
h2 {
margin-top: 50px;
padding-bottom: 5px;
margin-bottom: 30px;
border-bottom: 1px solid;
border-color: #aaa;
font-style: normal;
}
div.section h3 {
font-style: normal;
}
h3 {
font-size: 20px;
margin-top: 40px;
margin-bottom: 0px;
font-weight: bold;
font-style: normal;
}
h3.widgettitle {
font-style: normal;
}
h4 {
font-size:18px;
font-style: normal;
margin-bottom: 0em;
margin-top: 40px;
font-style: italic;
}
h5 {
font-size:16px;
}
h6 {
font-size:15px;
}
.toc-backref {
color: #333;
}
.contents ul {
padding-bottom: 1em;
}
dl.namespace {
display: none;
}
dl dt {
font-weight: normal;
}
table.docutils tbody {
margin: 1em 1em 1em 1em;
}
table.docutils td {
padding: 5pt 5pt 5pt 5pt;
font-size: 14px;
border-left: 0;
border-right: 0;
}
dl pre {
font-size: 14px;
}
table.docutils th {
padding: 5pt 5pt 5pt 5pt;
font-size: 14px;
font-style: normal;
border-left: 0;
border-right: 0;
}
table.docutils tr:first-child td {
#border-top: 1px solid #aaa;
}
.download {
font-family:"Courier New", Courier, mono;
font-weight: normal;
}
dt:target, .highlighted {
background-color: #ccc;
}
p {
padding-bottom: 0px;
}
p.last {
margin-bottom: 0px;
}
dl {
padding: 1em 1em 1em 1em;
background: #fffff0;
border: 1px solid #aaa;
}
dl {
margin-bottom: 10px;
}
table.docutils {
background: #fffff0;
border-collapse: collapse;
border: 1px solid #ddd;
}
dl table.docutils {
border: 0;
}
table.docutils dl {
border: 1px dashed #666;
}

doc/_static/broxygen-extra.js vendored Normal file

doc/_static/broxygen.css vendored Normal file

@@ -0,0 +1,437 @@
/* Automatically generated. Do not edit. */
#bro-main, #bro-standalone-main {
padding: 0 0 0 0;
position:relative;
z-index:1;
}
#bro-main {
margin-bottom: 2em;
}
#bro-standalone-main {
margin-bottom: 0em;
padding-left: 50px;
padding-right: 50px;
}
#bro-outer {
color: #333;
background: #ffffff;
}
#bro-title {
font-weight: bold;
font-size: 32px;
line-height:32px;
text-align: center;
padding-top: 3px;
margin-bottom: 30px;
font-family: Palatino,'Palatino Linotype',Georgia,serif;;
color: #000;
}
.opening:first-letter {
font-size: 24px;
font-weight: bold;
letter-spacing: 0.05em;
}
.opening {
font-size: 17px;
}
.version {
text-align: right;
font-size: 12px;
color: #aaa;
line-height: 0;
height: 0;
}
.git-info-version {
position: relative;
height: 2em;
top: -1em;
color: #ccc;
float: left;
font-size: 12px;
}
.git-info-date {
position: relative;
height: 2em;
top: -1em;
color: #ccc;
float: right;
font-size: 12px;
}
body {
font-family:Arial, Helvetica, sans-serif;
font-size:15px;
line-height:22px;
color: #333;
margin: 0px;
}
h1, h2, h3, h4, h5, h6,
h1 a, h2 a, h3 a, h4 a, h5 a, h6 a {
padding:0 0 20px 0;
font-weight:bold;
text-decoration:none;
}
div.section h3, div.section h4, div.section h5, div.section h6 {
font-style: italic;
}
h1, h2 {
font-size:27px;
letter-spacing:-1px;
}
h3 {
margin-top: 1em;
font-size:18px;
}
h4 {
font-size:16px;
}
h5 {
font-size:15px;
}
h6 {
font-size:12px;
}
p {
padding:0 0 20px 0;
}
hr {
background:none;
height:1px;
line-height:1px;
border:0;
margin:0 0 20px 0;
}
ul, ol {
margin:0 20px 20px 0;
padding-left:40px;
}
ul.simple, ol.simple {
margin:0 0px 0px 0;
}
blockquote {
margin:0 0 0 40px;
}
strong, dfn {
font-weight:bold;
}
em, dfn {
font-style:italic;
}
sup, sub {
line-height:0;
}
pre {
white-space:pre;
}
pre, code, tt {
font-family:"Courier New", Courier, mono;
}
dl {
margin: 0 0 20px 0;
}
dl dt {
font-weight: bold;
}
dd {
margin:0 0 20px 20px;
}
small {
font-size:75%;
}
a:link,
a:visited,
a:active
{
color: #2a85a7;
}
a:hover
{
color:#c24444;
}
h1, h2, h3, h4, h5, h6,
h1 a, h2 a, h3 a, h4 a, h5 a, h6 a
{
color: #333;
}
hr {
border-bottom:1px solid #ddd;
}
pre {
color: #333;
background: #FFFAE2;
padding: 7px 5px 3px 5px;
margin-bottom: 25px;
margin-top: 0px;
}
ul {
padding-bottom: 5px;
}
h1, h2 {
margin-top: 30px;
}
h1 {
margin-bottom: 50px;
margin-bottom: 20px;
padding-bottom: 5px;
border-bottom: 1px solid;
border-color: #aaa;
}
h2 {
font-size: 24px;
}
pre {
-moz-box-shadow:0 0 6px #ddd;
-webkit-box-shadow:0 0 6px #ddd;
box-shadow:0 0 6px #ddd;
}
a {
text-decoration:none;
}
p {
padding-bottom: 15px;
}
p, dd, li {
text-align: justify;
}
li {
margin-bottom: 5px;
}
#footer .widget_links ul a,
#footer .widget_links ol a
{
color: #ddd;
}
#footer .widget_links ul a:hover,
#footer .widget_links ol a:hover
{
color:#c24444;
}
#footer .widget li {
padding-bottom:10px;
}
#footer .widget_links li {
padding-bottom:1px;
}
#footer .widget li:last-child {
padding-bottom:0;
}
#footer .widgettitle {
color: #ddd;
}
.widget {
margin:0 0 40px 0;
}
.widget, .widgettitle {
font-size:12px;
line-height:18px;
}
.widgettitle {
font-weight:bold;
text-transform:uppercase;
padding:0 0 10px 0;
margin:0 0 20px 0;
line-height:100%;
}
.widget UL, .widget OL {
list-style-type:none;
margin:0;
padding:0;
}
.widget p {
padding:0;
}
.widget li {
padding-bottom:10px;
}
.widget a {
text-decoration:none;
}
#bro-main .widgettitle,
{
color: #333;
}
.widget img.left {
padding:5px 10px 10px 0;
}
.widget img.right {
padding:5px 0 10px 10px;
}
.ads .widgettitle {
margin-right:16px;
}
.widget {
margin-left: 1em;
}
.widgettitle {
color: #333;
}
.widgettitle {
border-bottom:1px solid #ddd;
}
.sidebar-toc ul li {
padding-bottom: 0px;
text-align: left;
list-style-type: square;
list-style-position: inside;
padding-left: 1em;
text-indent: -1em;
}
.sidebar-toc ul li li {
margin-left: 1em;
margin-bottom: 0px;
list-style-type: square;
}
.sidebar-toc ul li li a {
font-size: 8pt;
}
.contents {
padding: 10px;
background: #FFFAE2;
margin: 20px;
}
.topic-title {
font-size: 20px;
font-weight: bold;
padding: 0px 0px 5px 0px;
text-align: center;
padding-top: .5em;
}
.contents li {
margin-bottom: 0px;
list-style-type: square;
}
.contents ul ul li {
margin-left: 0px;
padding-left: 0px;
padding-top: 0em;
font-size: 90%;
list-style-type: square;
font-weight: normal;
}
.contents ul ul ul li {
list-style-type: none;
}
.contents ul ul ul ul li {
display:none;
}
.contents ul li {
padding-top: 1em;
list-style-type: none;
font-weight: bold;
}
.contents ul {
margin-left: 0px;
padding-left: 2em;
margin: 0px 0px 0px 0px;
}
.note, .warning, .error {
margin-left: 2em;
margin-right: 2em;
margin-top: 1.5em;
margin-bottom: 1.5em;
padding: 0.5em 1em 0.5em 1em;
overflow: auto;
border-left: solid 3px #aaa;
font-size: 15px;
color: #333;
}
.admonition p {
margin-left: 1em;
}
.admonition-title {
font-size: 16px;
font-weight: bold;
color: #000;
padding-bottom: 0em;
margin-bottom: .5em;
margin-top: 0em;
}


@@ -1,3 +0,0 @@
$(document).ready(function() {
$('.docutils.download').removeClass('download');
});

doc/_static/pygments.css vendored Normal file

@@ -0,0 +1,58 @@
.hll { background-color: #ffffcc }
.c { color: #aaaaaa; font-style: italic } /* Comment */
.err { color: #F00000; background-color: #F0A0A0 } /* Error */
.k { color: #0000aa } /* Keyword */
.cm { color: #aaaaaa; font-style: italic } /* Comment.Multiline */
.cp { color: #4c8317 } /* Comment.Preproc */
.c1 { color: #aaaaaa; font-style: italic } /* Comment.Single */
.cs { color: #0000aa; font-style: italic } /* Comment.Special */
.gd { color: #aa0000 } /* Generic.Deleted */
.ge { font-style: italic } /* Generic.Emph */
.gr { color: #aa0000 } /* Generic.Error */
.gh { color: #000080; font-weight: bold } /* Generic.Heading */
.gi { color: #00aa00 } /* Generic.Inserted */
.go { color: #888888 } /* Generic.Output */
.gp { color: #555555 } /* Generic.Prompt */
.gs { font-weight: bold } /* Generic.Strong */
.gu { color: #800080; font-weight: bold } /* Generic.Subheading */
.gt { color: #aa0000 } /* Generic.Traceback */
.kc { color: #0000aa } /* Keyword.Constant */
.kd { color: #0000aa } /* Keyword.Declaration */
.kn { color: #0000aa } /* Keyword.Namespace */
.kp { color: #0000aa } /* Keyword.Pseudo */
.kr { color: #0000aa } /* Keyword.Reserved */
.kt { color: #00aaaa } /* Keyword.Type */
.m { color: #009999 } /* Literal.Number */
.s { color: #aa5500 } /* Literal.String */
.na { color: #1e90ff } /* Name.Attribute */
.nb { color: #00aaaa } /* Name.Builtin */
.nc { color: #00aa00; text-decoration: underline } /* Name.Class */
.no { color: #aa0000 } /* Name.Constant */
.nd { color: #888888 } /* Name.Decorator */
.ni { color: #800000; font-weight: bold } /* Name.Entity */
.nf { color: #00aa00 } /* Name.Function */
.nn { color: #00aaaa; text-decoration: underline } /* Name.Namespace */
.nt { color: #1e90ff; font-weight: bold } /* Name.Tag */
.nv { color: #aa0000 } /* Name.Variable */
.ow { color: #0000aa } /* Operator.Word */
.w { color: #bbbbbb } /* Text.Whitespace */
.mf { color: #009999 } /* Literal.Number.Float */
.mh { color: #009999 } /* Literal.Number.Hex */
.mi { color: #009999 } /* Literal.Number.Integer */
.mo { color: #009999 } /* Literal.Number.Oct */
.sb { color: #aa5500 } /* Literal.String.Backtick */
.sc { color: #aa5500 } /* Literal.String.Char */
.sd { color: #aa5500 } /* Literal.String.Doc */
.s2 { color: #aa5500 } /* Literal.String.Double */
.se { color: #aa5500 } /* Literal.String.Escape */
.sh { color: #aa5500 } /* Literal.String.Heredoc */
.si { color: #aa5500 } /* Literal.String.Interpol */
.sx { color: #aa5500 } /* Literal.String.Other */
.sr { color: #009999 } /* Literal.String.Regex */
.s1 { color: #aa5500 } /* Literal.String.Single */
.ss { color: #0000aa } /* Literal.String.Symbol */
.bp { color: #00aaaa } /* Name.Builtin.Pseudo */
.vc { color: #aa0000 } /* Name.Variable.Class */
.vg { color: #aa0000 } /* Name.Variable.Global */
.vi { color: #aa0000 } /* Name.Variable.Instance */
.il { color: #009999 } /* Literal.Number.Integer.Long */


@@ -1,11 +1,12 @@
{% extends "!layout.html" %}
{% block extrahead %}
-<link rel="stylesheet" type="text/css" href="http://www.bro-ids.org/css/bro-ids.css" />
-<link rel="stylesheet" type="text/css" href="http://www.bro-ids.org/css/960.css" />
-<link rel="stylesheet" type="text/css" href="http://www.bro-ids.org/css/pygments.css" />
+<link rel="stylesheet" type="text/css" href="{{ pathto('_static/broxygen.css', 1) }}"></script>
+<link rel="stylesheet" type="text/css" href="{{ pathto('_static/960.css', 1) }}"></script>
+<link rel="stylesheet" type="text/css" href="{{ pathto('_static/pygments.css', 1) }}"></script>
<link rel="stylesheet" type="text/css" href="{{ pathto('_static/broxygen-extra.css', 1) }}"></script>
<script type="text/javascript" src="{{ pathto('_static/download.js', 1) }}"></script>
<script type="text/javascript" src="{{ pathto('_static/broxygen-extra.js', 1) }}"></script>
{% endblock %}
{% block header %}

@@ -47,6 +48,7 @@
Table of Contents
</h3>
<p>
+<!-- <ul id="sidebar-toc"></ul> -->
<ul>{{toc}}</ul>
</p>
</div>


@@ -24,7 +24,7 @@ sys.path.insert(0, os.path.abspath('sphinx-sources/ext'))
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
-extensions = ['bro', 'rst_directive']
+extensions = ['bro', 'rst_directive', 'sphinx.ext.todo', 'adapt-toc']

# Add any paths that contain templates here, relative to this directory.
templates_path = ['sphinx-sources/_templates', 'sphinx-sources/_static']

@@ -40,7 +40,7 @@ master_doc = 'index'
# General information about the project.
project = u'Bro'
-copyright = u'2011, The Bro Project'
+copyright = u'2012, The Bro Project'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the

@@ -169,6 +169,7 @@ html_sidebars = {
# Output file base name for HTML help builder.
htmlhelp_basename = 'Broxygen'
+html_add_permalinks = None

# -- Options for LaTeX output --------------------------------------------------

@@ -208,7 +209,6 @@ latex_documents = [
# If false, no module index is generated.
#latex_domain_indices = True

# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples

@@ -217,3 +217,6 @@ man_pages = [
('index', 'bro', u'Bro Documentation',
[u'The Bro Project'], 1)
]
# -- Options for todo plugin --------------------------------------------
todo_include_todos=True

doc/ext/adapt-toc.py Normal file

@@ -0,0 +1,29 @@
import sys
import re

# Removes the first TOC level, which is just the page title.

def process_html_toc(app, pagename, templatename, context, doctree):

    if not "toc" in context:
        return

    toc = context["toc"]

    lines = toc.strip().split("\n")
    lines = lines[2:-2]
    toc = "\n".join(lines)
    toc = "<ul>" + toc

    context["toc"] = toc

    # print >>sys.stderr, pagename
    # print >>sys.stderr, context["toc"]
    # print >>sys.stderr, "-----"
    # print >>sys.stderr, toc
    # print >>sys.stderr, "===="

def setup(app):
    app.connect('html-page-context', process_html_toc)


@@ -72,34 +72,30 @@ Usage
How can I identify backscatter?
-------------------------------

-Identifying backscatter via connections labeled as ``OTH`` is not
-a reliable means to detect backscatter. Use rather the following
-procedure:
-
-* Enable connection history via ``redef record_state_history=T`` to
-  track all control/data packet types in connection logs.
-
-* Backscatter is now visible in terms of connections that never had an
-  initial ``SYN`` but started instead with a ``SYN-ACK`` or ``RST``
-  (though this latter generally is just discarded).
+Identifying backscatter via connections labeled as ``OTH`` is not a reliable
+means to detect backscatter. Backscatter is however visible by interpreting
+the contents of the ``history`` field in the ``conn.log`` file. The basic idea
+is to watch for connections that never had an initial ``SYN`` but started
+instead with a ``SYN-ACK`` or ``RST`` (though this latter generally is just
+discarded). Here are some history fields which provide backscatter examples:
+``hAFf``, ``r``. Refer to the conn protocol analysis scripts to interpret the
+individual character meanings in the history field.

Is there help for understanding Bro's resource consumption?
-----------------------------------------------------------

-There are two scripts that collect statistics on resource usage:
-``stats.bro`` and ``profiling.bro``. The former is quite lightweight,
-while the latter should only be used for debugging. Furthermore,
-there's also ``print-globals.bro``, which prints the size of all
-global script variable at termination.
+There are two scripts that collect statistics on resource usage:
+``misc/stats.bro`` and ``misc/profiling.bro``. The former is quite
+lightweight, while the latter should only be used for debugging.

How can I capture packets as an unprivileged user?
--------------------------------------------------

-Normally, unprivileged users cannot capture packets from a network
-interface, which means they would not be able to use Bro to read/analyze
-live traffic. However, there are ways to enable packet capture
-permission for non-root users, which is worth doing in the context of
-using Bro to monitor live traffic
+Normally, unprivileged users cannot capture packets from a network interface,
+which means they would not be able to use Bro to read/analyze live traffic.
+However, there are operating system specific ways to enable packet capture
+permission for non-root users, which is worth doing in the context of using
+Bro to monitor live traffic.

With Linux Capabilities
^^^^^^^^^^^^^^^^^^^^^^^
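The rest of this answer is not included in this hunk. As a rough sketch of the
Linux route (the install prefix below is an assumption), the capture
capabilities are granted directly to the Bro binary:

.. console::

    # Hypothetical install path; adjust to wherever the bro binary lives.
    sudo setcap cap_net_raw,cap_net_admin=eip /usr/local/bro/bin/bro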


@@ -43,13 +43,14 @@ Basics
======

The data fields that a stream records are defined by a record type
-specified when it is created. Let's look at the script generating
-Bro's connection summaries as an example,
-``base/protocols/conn/main.bro``. It defines a record ``Conn::Info``
-that lists all the fields that go into ``conn.log``, each marked with
-a ``&log`` attribute indicating that it is part of the information
-written out. To write a log record, the script then passes an instance
-of ``Conn::Info`` to the logging framework's ``Log::write`` function.
+specified when it is created. Let's look at the script generating Bro's
+connection summaries as an example,
+:doc:`scripts/base/protocols/conn/main`. It defines a record
+:bro:type:`Conn::Info` that lists all the fields that go into
+``conn.log``, each marked with a ``&log`` attribute indicating that it
+is part of the information written out. To write a log record, the
+script then passes an instance of :bro:type:`Conn::Info` to the logging
+framework's :bro:id:`Log::write` function.

By default, each stream automatically gets a filter named ``default``
that generates the normal output by recording all record fields into a

@@ -66,7 +67,7 @@ To create new a new output file for an existing stream, you can add a
new filter. A filter can, e.g., restrict the set of fields being
logged:

-.. code:: bro:
+.. code:: bro

    event bro_init()
        {

@@ -85,14 +86,15 @@ Note the fields that are set for the filter:
``path``
    The filename for the output file, without any extension (which
    may be automatically added by the writer). Default path values
-   are generated by taking the stream's ID and munging it
-   slightly. ``Conn::LOG`` is converted into ``conn``,
-   ``PacketFilter::LOG`` is converted into ``packet_filter``, and
-   ``Notice::POLICY_LOG`` is converted into ``notice_policy``.
+   are generated by taking the stream's ID and munging it slightly.
+   :bro:enum:`Conn::LOG` is converted into ``conn``,
+   :bro:enum:`PacketFilter::LOG` is converted into
+   ``packet_filter``, and :bro:enum:`Notice::POLICY_LOG` is
+   converted into ``notice_policy``.

``include``
    A set limiting the fields to the ones given. The names
-   correspond to those in the ``Conn::LOG`` record, with
+   correspond to those in the :bro:type:`Conn::Info` record, with
    sub-records unrolled by concatenating fields (separated with
    dots).

@@ -158,10 +160,10 @@ further for example to log information by subnets or even by IP
address. Be careful, however, as it is easy to create many files very
quickly ...

-.. sidebar:
+.. sidebar:: A More Generic Path Function

-   The show ``split_log`` method has one draw-back: it can be used
-   only with the ``Conn::Log`` stream as the record type is hardcoded
+   The ``split_log`` method has one draw-back: it can be used
+   only with the :bro:enum:`Conn::LOG` stream as the record type is hardcoded
   into its argument list. However, Bro allows to do a more generic
   variant:

@@ -201,8 +203,8 @@ Extending
You can add further fields to a log stream by extending the record
type that defines its content. Let's say we want to add a boolean
-field ``is_private`` to ``Conn::Info`` that indicates whether the
-originator IP address is part of the RFC1918 space:
+field ``is_private`` to :bro:type:`Conn::Info` that indicates whether the
+originator IP address is part of the :rfc:`1918` space:

.. code:: bro

@@ -234,10 +236,10 @@ Notes:
- For extending logs this way, one needs a bit of knowledge about how
  the script that creates the log stream is organizing its state
  keeping. Most of the standard Bro scripts attach their log state to
-  the ``connection`` record where it can then be accessed, just as the
-  ``c$conn`` above. For example, the HTTP analysis adds a field ``http
-  : HTTP::Info`` to the ``connection`` record. See the script
-  reference for more information.
+  the :bro:type:`connection` record where it can then be accessed, just
+  as the ``c$conn`` above. For example, the HTTP analysis adds a field
+  ``http`` of type :bro:type:`HTTP::Info` to the :bro:type:`connection`
+  record. See the script reference for more information.

- When extending records as shown above, the new fields must always be
  declared either with a ``&default`` value or as ``&optional``.

@@ -251,8 +253,8 @@ Sometimes it is helpful to do additional analysis of the information
being logged. For these cases, a stream can specify an event that will
be generated every time a log record is written to it. All of Bro's
default log streams define such an event. For example, the connection
-log stream raises the event ``Conn::log_conn(rec: Conn::Info)``: You
-could use that for example for flagging when an a connection to
+log stream raises the event :bro:id:`Conn::log_conn`. You
+could use that for example for flagging when a connection to
specific destination exceeds a certain duration:

.. code:: bro

@@ -279,11 +281,32 @@ real-time.
Rotation
--------

+By default, no log rotation occurs, but it's globally controllable for all
+filters by redefining the :bro:id:`Log::default_rotation_interval` option:
+
+.. code:: bro
+
+    redef Log::default_rotation_interval = 1 hr;
+
+Or specifically for certain :bro:type:`Log::Filter` instances by setting
+their ``interv`` field. Here's an example of changing just the
+:bro:enum:`Conn::LOG` stream's default filter rotation.
+
+.. code:: bro
+
+    event bro_init()
+        {
+        local f = Log::get_filter(Conn::LOG, "default");
+        f$interv = 1 min;
+        Log::remove_filter(Conn::LOG, "default");
+        Log::add_filter(Conn::LOG, f);
+        }

ASCII Writer Configuration
--------------------------

The ASCII writer has a number of options for customizing the format of
-its output, see XXX.bro.
+its output, see :doc:`scripts/base/frameworks/logging/writers/ascii`.

Adding Streams
==============

@@ -321,8 +344,8 @@ example for the ``Foo`` module:
    Log::create_stream(Foo::LOG, [$columns=Info, $ev=log_foo]);
    }

-You can also the state to the ``connection`` record to make it easily
-accessible across event handlers:
+You can also add the state to the :bro:type:`connection` record to make
+it easily accessible across event handlers:

.. code:: bro

@@ -330,7 +353,7 @@ accessible across event handlers:
    foo: Info &optional;
    }

-Now you can use the ``Log::write`` method to output log records and
+Now you can use the :bro:id:`Log::write` method to output log records and
save the logged ``Foo::Info`` record into the connection record:

.. code:: bro

@@ -343,9 +366,9 @@ save the logged ``Foo::Info`` record into the connection record:
    }

See the existing scripts for how to work with such a new connection
-field. A simple example is ``base/protocols/syslog/main.bro``.
+field. A simple example is :doc:`scripts/base/protocols/syslog/main`.

-When you are developing scripts that add data to the ``connection``
+When you are developing scripts that add data to the :bro:type:`connection`
record, care must be given to when and how long data is stored.
Normally data saved to the connection record will remain there for the
duration of the connection and from a practical perspective it's not


@@ -31,13 +31,13 @@ See the `bro downloads page`_ for currently supported/targeted platforms.
* RPM

  .. console::

      sudo yum localinstall Bro-all*.rpm

* DEB

  .. console::

      sudo gdebi Bro-all-*.deb

@@ -56,15 +56,17 @@ Building From Source
Required Dependencies
~~~~~~~~~~~~~~~~~~~~~

+The following dependencies are required to build Bro:
+
* RPM/RedHat-based Linux:

  .. console::

      sudo yum install cmake make gcc gcc-c++ flex bison libpcap-devel openssl-devel python-devel swig zlib-devel file-devel

* DEB/Debian-based Linux:

  .. console::

      sudo apt-get install cmake make gcc g++ flex bison libpcap-dev libssl-dev python-dev swig zlib1g-dev libmagic-dev

@@ -73,9 +75,13 @@ Required Dependencies
  Most required dependencies should come with a minimal FreeBSD install
  except for the following.

  .. console::

-     sudo pkg_add -r cmake swig bison python
+     sudo pkg_add -r bash cmake swig bison python
+
+  Note that ``bash`` needs to be in ``PATH``, which by default it is
+  not. The FreeBSD package installs the binary into
+  ``/usr/local/bin``.

* Mac OS X

@@ -99,19 +105,19 @@ sending emails.
* RPM/RedHat-based Linux:

  .. console::

      sudo yum install GeoIP-devel sendmail

* DEB/Debian-based Linux:

  .. console::

      sudo apt-get install libgeoip-dev sendmail

* Ports-based FreeBSD

  .. console::

      sudo pkg_add -r GeoIP


@@ -6,113 +6,630 @@ Types
The Bro scripting language supports the following built-in types.

-.. TODO: add documentation

.. bro:type:: void
An internal Bro type representing an absence of a type. Should
most often be seen as a possible function return type.
.. bro:type:: bool
Reflects a value with one of two meanings: true or false. The two
``bool`` constants are ``T`` and ``F``.
.. bro:type:: int
A numeric type representing a signed integer. An ``int`` constant
is a string of digits preceded by a ``+`` or ``-`` sign, e.g.
``-42`` or ``+5``. When using type inferencing use care so that the
intended type is inferred, e.g. ``local size_difference = 0`` will
infer the :bro:type:`count` while ``local size_difference = +0``
will infer :bro:type:`int`.
.. bro:type:: count
A numeric type representing an unsigned integer. A ``count``
constant is a string of digits, e.g. ``1234`` or ``0``.
.. bro:type:: counter
An alias to :bro:type:`count`.
.. TODO: is there anything special about this type?
.. bro:type:: double
A numeric type representing a double-precision floating-point
number. Floating-point constants are written as a string of digits
with an optional decimal point, optional scale-factor in scientific
notation, and optional ``+`` or ``-`` sign. Examples are ``-1234``,
``-1234e0``, ``3.14159``, and ``.003e-23``.
.. bro:type:: time
A temporal type representing an absolute time. There is currently
no way to specify a ``time`` constant, but one can use the
:bro:id:`current_time` or :bro:id:`network_time` built-in functions
to assign a value to a ``time``-typed variable.
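For example, a minimal sketch (variable names are illustrative):

.. code:: bro

    # network_time() returns the timestamp of the most recently processed packet.
    local now: time = network_time();
    print now;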
.. bro:type:: interval
A temporal type representing a relative time. An ``interval``
constant can be written as a numeric constant followed by a time
unit where the time unit is one of ``usec``, ``sec``, ``min``,
``hr``, or ``day`` which respectively represent microseconds,
seconds, minutes, hours, and days. Whitespace between the numeric
constant and time unit is optional. Appending the letter "s" to the
time unit in order to pluralize it is also optional (to no semantic
effect). Examples of ``interval`` constants are ``3.5 min`` and
``3.5mins``. An ``interval`` can also be negated, for example
``-12 hr`` represents "twelve hours in the past". Intervals also
support addition, subtraction, multiplication, division, and
comparison operations.
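For example (an illustrative sketch):

.. code:: bro

    local timeout: interval = 5 mins;
    # Adding an interval to a time yields a new time value.
    local deadline: time = network_time() + timeout;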
.. bro:type:: string
A type used to hold character-string values which represent text.
String constants are created by enclosing text in double quotes (")
and the backslash character (\) introduces escape sequences.
Note that Bro represents strings internally as a count and vector of
bytes rather than a NUL-terminated byte string (although string
constants are also automatically NUL-terminated). This is because
network traffic can easily introduce NULs into strings either by
nature of an application, inadvertently, or maliciously. And while
NULs are allowed in Bro strings, when present in strings passed as
arguments to many functions, a run-time error can occur as their
presence likely indicates a sort of problem. In that case, the
string will also only be represented to the user as the literal
"<string-with-NUL>" string.
.. bro:type:: pattern
A type representing regular-expression patterns which can be used
for fast text-searching operations. Pattern constants are created
by enclosing text within forward slashes (/) and is the same syntax
as the patterns supported by the `flex lexical analyzer
<http://flex.sourceforge.net/manual/Patterns.html>`_. The speed of
regular expression matching does not depend on the complexity or
size of the patterns. Patterns support two types of matching, exact
and embedded.
In exact matching the ``==`` equality relational operator is used
with one :bro:type:`string` operand and one :bro:type:`pattern`
operand to check whether the full string exactly matches the
pattern. In this case, the ``^`` beginning-of-line and ``$``
end-of-line anchors are redundant since pattern is implicitly
anchored to the beginning and end of the line to facilitate an exact
match. For example::
"foo" == /foo|bar/
yields true, while::
/foo|bar/ == "foobar"
yields false. The ``!=`` operator would yield the negation of ``==``.
In embedded matching the ``in`` operator is again used with one
:bro:type:`string` operand and one :bro:type:`pattern` operand
(which must be on the left-hand side), but tests whether the pattern
appears anywhere within the given string. For example::
/foo|bar/ in "foobar"
yields true, while::
/^oob/ in "foobar"
is false since "oob" does not appear at the start of "foobar". The
``!in`` operator would yield the negation of ``in``.
.. bro:type:: enum
A type allowing the specification of a set of related values that
have no further structure. The only operations allowed on
enumerations are equality comparisons and they do not have
associated values or ordering. An example declaration:
.. code:: bro
type color: enum { Red, White, Blue, };
The last comma after ``Blue`` is optional.
.. bro:type:: timer
.. TODO: is this a type that's exposed to users?
.. bro:type:: port
A type representing transport-level port numbers. Besides TCP and
UDP ports, there is a concept of an ICMP "port" where the source
port is the ICMP message type and the destination port the ICMP
message code. A ``port`` constant is written as an unsigned integer
followed by one of ``/tcp``, ``/udp``, ``/icmp``, or ``/unknown``.
Ports can be compared for equality and also for ordering. When
comparing order across transport-level protocols, ``/unknown`` <
``/tcp`` < ``/udp`` < ``icmp``, for example ``65535/tcp`` is smaller
than ``0/udp``.
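A short illustrative sketch:

.. code:: bro

    local web_port = 80/tcp;
    # Ordering across protocols: any UDP port compares greater than any TCP port.
    if ( 53/udp > web_port )
        print "UDP ports order after TCP ports";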
.. bro:type:: addr

-.. bro:type:: net

A type representing an IP address. Currently, Bro defaults to only
supporting IPv4 addresses unless configured/built with
``--enable-brov6``, in which case, IPv6 addresses are supported.
IPv4 address constants are written in "dotted quad" format,
``A1.A2.A3.A4``, where Ai all lie between 0 and 255.
IPv6 address constants are written as colon-separated hexadecimal form
as described by :rfc:`2373`.
Hostname constants can also be used, but since a hostname can
correspond to multiple IP addresses, the type of such variable is a
:bro:type:`set` of :bro:type:`addr` elements. For example:
.. code:: bro
local a = www.google.com;
Addresses can be compared for (in)equality using ``==`` and ``!=``.
They can also be masked with ``/`` to produce a :bro:type:`subnet`:
.. code:: bro
local a: addr = 192.168.1.100;
local s: subnet = 192.168.0.0/16;
if ( a/16 == s )
print "true";
And checked for inclusion within a :bro:type:`subnet` using ``in`` :
.. code:: bro
local a: addr = 192.168.1.100;
local s: subnet = 192.168.0.0/16;
if ( a in s )
print "true";
.. bro:type:: subnet
A type representing a block of IP addresses in CIDR notation. A
``subnet`` constant is written as an :bro:type:`addr` followed by a
slash (/) and then the network prefix size specified as a decimal
number. For example, ``192.168.0.0/16``.
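For example (illustrative only):

.. code:: bro

    local internal: subnet = 10.0.0.0/8;
    # Membership of an address in a subnet is tested with ``in``.
    if ( 10.1.2.3 in internal )
        print "internal address";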
.. bro:type:: any
Used to bypass strong typing. For example, a function can take an
argument of type ``any`` when it may be of different types.
.. bro:type:: table

-.. bro:type:: union
-.. bro:type:: record
-.. bro:type:: types
-.. bro:type:: func
-.. bro:type:: file
-.. bro:type:: vector
-.. TODO: below are kind of "special cases" that bro knows about?

An associative array that maps from one set of values to another. The
values being mapped are termed the *index* or *indices* and the
result of the mapping is called the *yield*. Indexing into tables
is very efficient, and internally it is just a single hash table
lookup.

The table declaration syntax is::

    table [ type^+ ] of type

where *type^+* is one or more types, separated by commas. For example:

.. code:: bro

    global a: table[count] of string;

declares a table indexed by :bro:type:`count` values and yielding
:bro:type:`string` values. The yield type can also be more complex:
.. code:: bro
global a: table[count] of table[addr, port] of string;
which declares a table indexed by :bro:type:`count` and yielding
another :bro:type:`table` which is indexed by an :bro:type:`addr`
and :bro:type:`port` to yield a :bro:type:`string`.
Initialization of tables occurs by enclosing a set of initializers within
braces, for example:
.. code:: bro
global t: table[count] of string = {
[11] = "eleven",
[5] = "five",
};
Accessing table elements is provided by enclosing values within square
brackets (``[]``), for example:
.. code:: bro
t[13] = "thirteen";
And membership can be tested with ``in``:
.. code:: bro
if ( 13 in t )
...
Iterate over tables with a ``for`` loop:
.. code:: bro
local t: table[count] of string;
for ( n in t )
...
local services: table[addr, port] of string;
for ( [a, p] in services )
...
Remove individual table elements with ``delete``:
.. code:: bro
delete t[13];
Nothing happens if the element with value ``13`` isn't present in
the table.
Table size can be obtained by placing the table identifier between
vertical pipe (|) characters:
.. code:: bro
|t|
.. bro:type:: set
A set is like a :bro:type:`table`, but it is a collection of indices
that do not map to any yield value. They are declared with the
syntax::
set [ type^+ ]
where *type^+* is one or more types separated by commas.
Sets are initialized by listing elements enclosed by curly braces:
.. code:: bro
global s: set[port] = { 21/tcp, 23/tcp, 80/tcp, 443/tcp };
global s2: set[port, string] = { [21/tcp, "ftp"], [23/tcp, "telnet"] };
The types are explicitly shown in the example above, but they could
have been left to type inference.
Set membership is tested with ``in``:
.. code:: bro
if ( 21/tcp in s )
...
Elements are added with ``add``:
.. code:: bro
add s[22/tcp];
And removed with ``delete``:
.. code:: bro
delete s[21/tcp];
Set size can be obtained by placing the set identifier between
vertical pipe (|) characters:
.. code:: bro
|s|
.. bro:type:: vector
A vector is like a :bro:type:`table`, except it's always indexed by a
:bro:type:`count`. A vector is declared like:
.. code:: bro
global v: vector of string;
And can be initialized with the vector constructor:
.. code:: bro
global v: vector of string = vector("one", "two", "three");
Adding an element to a vector involves accessing/assigning it:
.. code:: bro
v[3] = "four";
Note how the vector indexing is 0-based.
Vector size can be obtained by placing the vector identifier between
vertical pipe (|) characters:
.. code:: bro
|v|
.. bro:type:: record
A ``record`` is a collection of values. Each value has a field name
and a type. Values do not need to have the same type and the types
have no restrictions. An example record type definition:
.. code:: bro
type MyRecordType: record {
c: count;
s: string &optional;
};
Access to a record field uses the dollar sign (``$``) operator:
.. code:: bro
global r: MyRecordType;
r$c = 13;
Record assignment can be done field by field or as a whole like:
.. code:: bro
r = [$c = 13, $s = "thirteen"];
When assigning a whole record value, all fields that are not
:bro:attr:`&optional` and do not have a :bro:attr:`&default` attribute
must be specified.
To test for existence of a field that is :bro:attr:`&optional`, use the
``?$`` operator:
.. code:: bro
if ( r?$s )
...
.. bro:type:: file
Bro supports writing to files, but not reading from them. For
example, declare, open, and write to a file and finally close it
like:
.. code:: bro
global f: file = open("myfile");
print f, "hello, world";
close(f);
Writing to files like this for logging usually isn't recommended; for
better logging support see :doc:`/logging`.
.. bro:type:: func
See :bro:type:`function`.
.. bro:type:: function
Function types in Bro are declared using::
function( argument* ): type
where *argument* is a (possibly empty) comma-separated list of
arguments, and *type* is an optional return type. For example:
.. code:: bro
global greeting: function(name: string): string;
Here ``greeting`` is an identifier with a certain function type.
The function body is not defined yet and ``greeting`` could even
have different function body values at different times. To define
a function including a body value, the syntax is like:
.. code:: bro
function greeting(name: string): string
{
return "Hello, " + name;
}
Note that in the definition above, it's not necessary for us to have
done the first (forward) declaration of ``greeting`` as a function
type, but when it is, the argument list and return type must match
exactly.
Function types don't need to have a name and can be assigned anonymously:
.. code:: bro
greeting = function(name: string): string { return "Hi, " + name; };
And finally, the function can be called like:
.. code:: bro
print greeting("Dave");
.. bro:type:: event
Event handlers are nearly identical in both syntax and semantics to
a :bro:type:`function`, with the two differences being that event
handlers have no return type since they never return a value, and
you cannot call an event handler. Instead of directly calling an
event handler from a script, event handler bodies are executed when
they are invoked by one of three different methods:
- From the event engine
When the event engine detects an event for which you have
defined a corresponding event handler, it queues an event for
that handler. The handler is invoked as soon as the event
engine finishes processing the current packet and flushing the
invocation of other event handlers that were queued first.
- With the ``event`` statement from a script
Immediately queuing invocation of an event handler occurs like:
.. code:: bro
event password_exposed(user, password);
This assumes that ``password_exposed`` was previously declared
as an event handler type with compatible arguments.
- Via the ``schedule`` expression in a script
This delays the invocation of event handlers until some time in
the future. For example:
.. code:: bro
schedule 5 secs { password_exposed(user, password) };
Multiple event handler bodies can be defined for the same event handler
identifier and the body of each will be executed in turn. Ordering
of execution can be influenced with :bro:attr:`&priority`.
Attributes
----------

-The Bro scripting language supports the following built-in attributes.
-.. TODO: add documentation

Attributes occur at the end of type/event declarations and change their
behavior. The syntax is ``&key`` or ``&key=val``, e.g., ``type T:
set[count] &read_expire=5min`` or ``event foo() &priority=-3``. The Bro
scripting language supports the following built-in attributes.
.. bro:attr:: &optional
Allows a record field to be missing. For example the type ``record {
a: addr, b: port &optional }`` could be instantiated both as
singleton ``[$a=127.0.0.1]`` or pair ``[$a=127.0.0.1, $b=80/tcp]``.
.. bro:attr:: &default
Uses a default value for a record field or container elements. For
example, ``table[int] of string &default="foo"`` would create a
table that returns the :bro:type:`string` ``"foo"`` for any
non-existing index.
.. bro:attr:: &redef
Allows for redefinition of initial object values. This is typically
used with constants, for example, ``const clever = T &redef;`` would
allow the constant to be redefined at some later point during script
execution.
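A later script could then override the constant declared above, e.g.:

.. code:: bro

    # Only identifiers declared with &redef may be redefined.
    redef clever = F;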
.. bro:attr:: &rotate_interval
Rotates a file after a specified interval.
.. bro:attr:: &rotate_size
Rotates a file after it has reached a given size in bytes.
.. bro:attr:: &add_func
.. TODO: needs to be documented.
.. bro:attr:: &delete_func
.. TODO: needs to be documented.
.. bro:attr:: &expire_func
Called right before a container element expires.
.. bro:attr:: &read_expire
Specifies a read expiration timeout for container elements. That is,
the element expires after the given amount of time since the last
time it has been read. Note that a write also counts as a read.
.. bro:attr:: &write_expire
Specifies a write expiration timeout for container elements. That
is, the element expires after the given amount of time since the
last time it has been written.
.. bro:attr:: &create_expire
Specifies a creation expiration timeout for container elements. That
is, the element expires after the given amount of time since it has
been inserted into the container, regardless of any reads or writes.
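As an illustrative sketch, an expiring container might be declared like:

.. code:: bro

    # Elements are dropped one hour after insertion, regardless of accesses.
    global recently_seen: set[addr] &create_expire=1hr;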
.. bro:attr:: &persistent
Makes a variable persistent, i.e., its value is written to disk (per
default at shutdown time).
.. bro:attr:: &synchronized
Synchronizes variable accesses across nodes. The value of a
``&synchronized`` variable is automatically propagated to all peers
when it changes.
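For example (illustrative only):

.. code:: bro

    # The value of this set is kept in sync across communicating Bro peers.
    global blocked_hosts: set[addr] &synchronized;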
.. bro:attr:: &postprocessor
.. TODO: needs to be documented.
.. bro:attr:: &encrypt
Encrypts files right before writing them to disk.
.. TODO: needs to be documented in more detail.
.. bro:attr:: &match
.. TODO: needs to be documented.
.. bro:attr:: &disable_print_hook
Deprecated. Will be removed.
.. bro:attr:: &raw_output
Opens a file in raw mode, i.e., non-ASCII characters are not
escaped.
.. bro:attr:: &mergeable
Prefers set union to assignment for synchronized state. This
attribute is used in conjunction with :bro:attr:`&synchronized`
container types: when the same container is updated at two peers
with different values, the propagation of the state causes a race
condition, where the last update succeeds. This can cause
inconsistencies and can be avoided by unifying the two sets, rather
than merely overwriting the old value.
.. bro:attr:: &priority
Specifies the execution priority of an event handler. Higher values
are executed before lower ones. The default value is 0.
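For example (illustrative only):

.. code:: bro

    # Runs before other bro_init handlers that use the default priority of 0.
    event bro_init() &priority=10
        {
        print "early initialization";
        }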
.. bro:attr:: &group
Groups event handlers such that those in the same group can be
jointly activated or deactivated.
.. bro:attr:: &log .. bro:attr:: &log
Writes a record field to the associated log stream.
.. bro:attr:: &error_handler
.. TODO: needs to be documented.
.. bro:attr:: (&tracked) .. bro:attr:: (&tracked)
.. TODO: needs to be documented or removed if it's not used anywhere.

View file

@ -34,7 +34,7 @@ Let's look at an example signature first:
This signature asks Bro to match the regular expression ``.*root`` on This signature asks Bro to match the regular expression ``.*root`` on
all TCP connections going to port 80. When the signature triggers, Bro all TCP connections going to port 80. When the signature triggers, Bro
will raise an event ``signature_match`` of the form: will raise an event :bro:id:`signature_match` of the form:
.. code:: bro .. code:: bro
@ -45,20 +45,20 @@ triggered the match, ``msg`` is the string specified by the
signature's event statement (``Found root!``), and data is the last signature's event statement (``Found root!``), and data is the last
piece of payload which triggered the pattern match. piece of payload which triggered the pattern match.
To turn such ``signature_match`` events into actual alarms, you can To turn such :bro:id:`signature_match` events into actual alarms, you can
load Bro's ``signature.bro`` script. This script contains a default load Bro's :doc:`/scripts/base/frameworks/signatures/main` script.
event handler that raises ``SensitiveSignature`` :doc:`Notices <notice>` This script contains a default event handler that raises
:bro:enum:`Signatures::Sensitive_Signature` :doc:`Notices <notice>`
(as well as others; see the beginning of the script). (as well as others; see the beginning of the script).
As signatures are independent of Bro's policy scripts, they are put As signatures are independent of Bro's policy scripts, they are put
into their own file(s). There are two ways to specify which files into their own file(s). There are two ways to specify which files
contain signatures: By using the ``-s`` flag when you invoke Bro, or contain signatures: By using the ``-s`` flag when you invoke Bro, or
by extending the Bro variable ``signatures_files`` using the ``+=`` by extending the Bro variable :bro:id:`signature_files` using the ``+=``
operator. If a signature file is given without a path, it is searched operator. If a signature file is given without a path, it is searched
along the normal ``BROPATH``. The default extension of the file name along the normal ``BROPATH``. The default extension of the file name
is ``.sig``, and Bro appends that automatically when necessary. is ``.sig``, and Bro appends that automatically when necessary.
Signature language Signature language
================== ==================
@ -90,7 +90,7 @@ one of ``==``, ``!=``, ``<``, ``<=``, ``>``, ``>=``; and
against. The following keywords are defined: against. The following keywords are defined:
``src-ip``/``dst-ip <cmp> <address-list>`` ``src-ip``/``dst-ip <cmp> <address-list>``
Source and destination address, repectively. Addresses can be Source and destination address, respectively. Addresses can be
given as IP addresses or CIDR masks. given as IP addresses or CIDR masks.
``src-port``/``dst-port`` ``<int-list>`` ``src-port``/``dst-port`` ``<int-list>``
@ -126,7 +126,7 @@ CIDR notation for netmasks and is translated into a corresponding
bitmask applied to the packet's value prior to the comparison (similar bitmask applied to the packet's value prior to the comparison (similar
to the optional ``& integer``). to the optional ``& integer``).
Putting all together, this is an example conditiation that is Putting it all together, this is an example condition that is
equivalent to ``dst-ip == 1.2.3.4/16, 5.6.7.8/24``: equivalent to ``dst-ip == 1.2.3.4/16, 5.6.7.8/24``:
.. code:: bro-sig .. code:: bro-sig
@ -134,7 +134,7 @@ equivalent to ``dst- ip == 1.2.3.4/16, 5.6.7.8/24``:
header ip[16:4] == 1.2.3.4/16, 5.6.7.8/24 header ip[16:4] == 1.2.3.4/16, 5.6.7.8/24
Internally, the predefined header conditions are in fact just Internally, the predefined header conditions are in fact just
short-cuts and mappend into a generic condition. short-cuts and mapped into a generic condition.
Content Conditions Content Conditions
~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~
@ -265,7 +265,7 @@ Actions define what to do if a signature matches. Currently, there are
two actions defined: two actions defined:
``event <string>`` ``event <string>``
Raises a ``signature_match`` event. The event handler has the Raises a :bro:id:`signature_match` event. The event handler has the
following type: following type:
.. code:: bro .. code:: bro
@ -339,10 +339,10 @@ Things to keep in mind when writing signatures
respectively. Generally, Bro follows `flex's regular expression respectively. Generally, Bro follows `flex's regular expression
syntax syntax
<http://www.gnu.org/software/flex/manual/html_chapter/flex_7.html>`_. <http://www.gnu.org/software/flex/manual/html_chapter/flex_7.html>`_.
See the DPD signatures in ``policy/sigs/dpd.bro`` for some examples See the DPD signatures in ``base/frameworks/dpd/dpd.sig`` for some examples
of fairly complex payload patterns. of fairly complex payload patterns.
* The data argument of the ``signature_match`` handler might not carry * The data argument of the :bro:id:`signature_match` handler might not carry
the full text matched by the regular expression. Bro performs the the full text matched by the regular expression. Bro performs the
matching incrementally as packets come in; when the signature matching incrementally as packets come in; when the signature
eventually fires, it can only pass on the most recent chunk of data. eventually fires, it can only pass on the most recent chunk of data.
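For reference, a minimal handler sketch for the ``signature_match`` events
discussed above (the parameter list follows the standard event declaration;
the body is hypothetical):

.. code:: bro

    event signature_match(state: signature_state, msg: string, data: string)
        {
        print fmt("signature %s matched on %s: %s",
                  state$sig_id, state$conn$id$orig_h, msg);
        }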

View file

@ -9,10 +9,10 @@ redef peer_description = Cluster::node;
# Add a cluster prefix. # Add a cluster prefix.
@prefixes += cluster @prefixes += cluster
## If this script isn't found anywhere, the cluster bombs out. # If this script isn't found anywhere, the cluster bombs out.
## Loading the cluster framework requires that a script by this name exists # Loading the cluster framework requires that a script by this name exists
## somewhere in the BROPATH. The only thing in the file should be the # somewhere in the BROPATH. The only thing in the file should be the
## cluster definition in the :bro:id:`Cluster::nodes` variable. # cluster definition in the :bro:id:`Cluster::nodes` variable.
@load cluster-layout @load cluster-layout
@if ( Cluster::node in Cluster::nodes ) @if ( Cluster::node in Cluster::nodes )

View file

@ -1,21 +1,45 @@
##! A framework for establishing and controlling a cluster of Bro instances.
##! In order to use the cluster framework, a script named
##! ``cluster-layout.bro`` must exist somewhere in Bro's script search path
##! which has a cluster definition of the :bro:id:`Cluster::nodes` variable.
##! The ``CLUSTER_NODE`` environment variable or :bro:id:`Cluster::node`
##! must also be set and the cluster framework loaded as a package like
##! ``@load base/frameworks/cluster``.
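To illustrate, a hypothetical minimal ``cluster-layout.bro`` matching the
:bro:type:`Cluster::Node` record documented below might look like this
(node names, addresses, ports, and interface are made up):

.. code:: bro

    redef Cluster::nodes = {
        ["manager"]  = [$node_type=Cluster::MANAGER, $ip=10.0.0.1, $p=47761/tcp,
                        $workers=set("worker-1")],
        ["proxy-1"]  = [$node_type=Cluster::PROXY,   $ip=10.0.0.1, $p=47762/tcp,
                        $manager="manager", $workers=set("worker-1")],
        ["worker-1"] = [$node_type=Cluster::WORKER,  $ip=10.0.0.2, $p=47763/tcp,
                        $manager="manager", $proxy="proxy-1", $interface="eth0"],
    };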
@load base/frameworks/control @load base/frameworks/control
module Cluster; module Cluster;
export { export {
## The cluster logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## The record type which contains the column fields of the cluster log.
type Info: record { type Info: record {
## The time at which a cluster message was generated.
ts: time; ts: time;
## A message indicating information about the cluster's operation.
message: string; message: string;
} &log; } &log;
## Types of nodes that are allowed to participate in the cluster
## configuration.
type NodeType: enum { type NodeType: enum {
## A dummy node type indicating the local node is not operating
## within a cluster.
NONE, NONE,
## A node type which is allowed to view/manipulate the configuration
## of other nodes in the cluster.
CONTROL, CONTROL,
## A node type responsible for log and policy management.
MANAGER, MANAGER,
## A node type for relaying worker node communication and synchronizing
## worker node state.
PROXY, PROXY,
## The node type doing all the actual traffic analysis.
WORKER, WORKER,
## A node acting as a traffic recorder using the
## `Time Machine <http://tracker.bro-ids.org/time-machine>`_ software.
TIME_MACHINE, TIME_MACHINE,
}; };
@ -49,30 +73,38 @@ export {
## Record type to indicate a node in a cluster. ## Record type to indicate a node in a cluster.
type Node: record { type Node: record {
## Identifies the type of cluster node in this node's configuration.
node_type: NodeType; node_type: NodeType;
## The IP address of the cluster node.
ip: addr; ip: addr;
## The port to which this local node can connect when
## establishing communication.
p: port; p: port;
## Identifier for the interface a worker is sniffing. ## Identifier for the interface a worker is sniffing.
interface: string &optional; interface: string &optional;
## Name of the manager node this node uses. For workers and proxies.
## Manager node this node uses. For workers and proxies.
manager: string &optional; manager: string &optional;
## Proxy node this node uses. For workers and managers. ## Name of the proxy node this node uses. For workers and managers.
proxy: string &optional; proxy: string &optional;
## Worker nodes that this node connects with. For managers and proxies. ## Names of worker nodes that this node connects with.
## For managers and proxies.
workers: set[string] &optional; workers: set[string] &optional;
## Name of a time machine node with which this node connects.
time_machine: string &optional; time_machine: string &optional;
}; };
## This function can be called at any time to determine if the cluster ## This function can be called at any time to determine if the cluster
## framework is being enabled for this run. ## framework is being enabled for this run.
##
## Returns: True if :bro:id:`Cluster::node` has been set.
global is_enabled: function(): bool; global is_enabled: function(): bool;
## This function can be called at any time to determine what type of ## This function can be called at any time to determine what type of
## cluster node the current Bro instance is going to be acting as. ## cluster node the current Bro instance is going to be acting as.
## If :bro:id:`Cluster::is_enabled` returns false, then ## If :bro:id:`Cluster::is_enabled` returns false, then
## :bro:enum:`Cluster::NONE` is returned. ## :bro:enum:`Cluster::NONE` is returned.
##
## Returns: The :bro:type:`Cluster::NodeType` the calling node acts as.
global local_node_type: function(): NodeType; global local_node_type: function(): NodeType;
## This gives the value for the number of workers currently connected to, ## This gives the value for the number of workers currently connected to,

View file

@ -1,3 +1,7 @@
##! Redefines the options common to all proxy nodes within a Bro cluster.
##! In particular, proxies are not meant to produce logs locally and they
##! do not forward events anywhere; they mainly synchronize state between
##! worker nodes.
@prefixes += cluster-proxy @prefixes += cluster-proxy

View file

@ -1,3 +1,7 @@
##! Redefines some options common to all worker nodes within a Bro cluster.
##! In particular, worker nodes do not produce logs locally; instead they
##! send them off to a manager node for processing.
@prefixes += cluster-worker @prefixes += cluster-worker
## Don't do any local logging. ## Don't do any local logging.

View file

@ -1,3 +1,6 @@
##! This script establishes communication among all nodes in a cluster
##! as defined by :bro:id:`Cluster::nodes`.
@load ./main @load ./main
@load base/frameworks/communication @load base/frameworks/communication

View file

@ -1,11 +1,13 @@
##! Connect to remote Bro or Broccoli instances to share state and/or transfer ##! Facilitates connecting to remote Bro or Broccoli instances to share state
##! events. ##! and/or transfer events.
@load base/frameworks/packet-filter @load base/frameworks/packet-filter
module Communication; module Communication;
export { export {
## The communication logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## Which interface to listen on (0.0.0.0 for any interface). ## Which interface to listen on (0.0.0.0 for any interface).
@ -21,14 +23,25 @@ export {
## compression. ## compression.
global compression_level = 0 &redef; global compression_level = 0 &redef;
## A record type containing the column fields of the communication log.
type Info: record { type Info: record {
## The network time at which a communication event occurred.
ts: time &log; ts: time &log;
## The peer name (if any) for which a communication event is concerned.
peer: string &log &optional; peer: string &log &optional;
## Where the communication event message originated from, that is,
## either from the scripting layer or inside the Bro process.
src_name: string &log &optional; src_name: string &log &optional;
## .. todo:: currently unused.
connected_peer_desc: string &log &optional; connected_peer_desc: string &log &optional;
## .. todo:: currently unused.
connected_peer_addr: addr &log &optional; connected_peer_addr: addr &log &optional;
## .. todo:: currently unused.
connected_peer_port: port &log &optional; connected_peer_port: port &log &optional;
## The severity of the communication event message.
level: string &log &optional; level: string &log &optional;
## A message describing the communication event between Bro or
## Broccoli instances.
message: string &log; message: string &log;
}; };
@ -77,7 +90,7 @@ export {
auth: bool &default = F; auth: bool &default = F;
## If not set, no capture filter is sent. ## If not set, no capture filter is sent.
## If set to "", the default cature filter is sent. ## If set to "", the default capture filter is sent.
capture_filter: string &optional; capture_filter: string &optional;
## Whether to use SSL-based communication. ## Whether to use SSL-based communication.
@ -97,10 +110,24 @@ export {
## to or respond to connections from. ## to or respond to connections from.
global nodes: table[string] of Node &redef; global nodes: table[string] of Node &redef;
## A table of peer nodes for which this node issued a
## :bro:id:`Communication::connect_peer` call but with which a connection
## has not yet been established or with which a connection has been
## closed and is currently in the process of being re-established.
## When a connection is successfully established, the peer is removed
## from the table.
global pending_peers: table[peer_id] of Node; global pending_peers: table[peer_id] of Node;
## A table of peer nodes for which this node has an established connection.
## Peers are automatically removed if their connection is closed and
## automatically added back if a connection is re-established later.
global connected_peers: table[peer_id] of Node; global connected_peers: table[peer_id] of Node;
## Connect to nodes[node], independent of its "connect" flag. ## Connect to a node in :bro:id:`Communication::nodes` independent
## of its "connect" flag.
##
## peer: the string used to index a particular node within the
## :bro:id:`Communication::nodes` table.
global connect_peer: function(peer: string); global connect_peer: function(peer: string);
} }
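As an illustration of the documented ``nodes`` table and ``connect_peer``
function, a hypothetical peer entry might be configured as follows (host,
port, retry interval, and event pattern are made up; the field names assume
the standard ``Communication::Node`` record):

.. code:: bro

    redef Communication::nodes += {
        ["collector"] = [$host=10.0.0.5, $p=47757/tcp, $connect=T,
                         $retry=1min, $events=/my_custom_event/],
    };

With ``$connect=F`` instead, the connection could be initiated later by
calling ``Communication::connect_peer("collector")``.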

View file

@ -1,43 +1,30 @@
##! This is a utility script that sends the current values of all &redef'able ##! The control framework provides the foundation for providing "commands"
##! consts to a remote Bro then sends the :bro:id:`configuration_update` event ##! that can be taken remotely at runtime to modify a running Bro instance
##! and terminates processing. ##! or collect information from the running instance.
##!
##! Intended to be used from the command line like this when starting a controller::
##!
##! bro <scripts> frameworks/control/controller Control::host=<host_addr> Control::port=<host_port> Control::cmd=<command> [Control::arg=<arg>]
##!
##! A controllee only needs to load the controllee script in addition
##! to the specific analysis scripts desired. It may also need a node
##! configured as a controller node in the communications nodes configuration::
##!
##! bro <scripts> frameworks/control/controllee
##!
##! To use the framework as a controllee, it only needs to be loaded and
##! the controlled node needs to accept all events in the "Control::" namespace
##! from the host where the control actions will be performed, along with
##! using the "control" class.
module Control; module Control;
export { export {
## This is the address of the host that will be controlled. ## The address of the host that will be controlled.
const host = 0.0.0.0 &redef; const host = 0.0.0.0 &redef;
## This is the port of the host that will be controlled. ## The port of the host that will be controlled.
const host_port = 0/tcp &redef; const host_port = 0/tcp &redef;
## This is the command that is being done. It's typically set on the ## The command that is being done. It's typically set on the
## command line and influences whether this instance starts up as a ## command line.
## controller or controllee.
const cmd = "" &redef; const cmd = "" &redef;
## This can be used by commands that take an argument. ## This can be used by commands that take an argument.
const arg = "" &redef; const arg = "" &redef;
## Events that need to be handled by controllers.
const controller_events = /Control::.*_request/ &redef; const controller_events = /Control::.*_request/ &redef;
## Events that need to be handled by controllees.
const controllee_events = /Control::.*_response/ &redef; const controllee_events = /Control::.*_response/ &redef;
## These are the commands that can be given on the command line for ## The commands that can currently be given on the command line for
## remote control. ## remote control.
const commands: set[string] = { const commands: set[string] = {
"id_value", "id_value",
@ -45,15 +32,15 @@ export {
"net_stats", "net_stats",
"configuration_update", "configuration_update",
"shutdown", "shutdown",
}; } &redef;
## Variable IDs that are to be ignored by the update process. ## Variable IDs that are to be ignored by the update process.
const ignore_ids: set[string] = { const ignore_ids: set[string] = { };
};
## Event for requesting the value of an ID (a variable). ## Event for requesting the value of an ID (a variable).
global id_value_request: event(id: string); global id_value_request: event(id: string);
## Event for returning the value of an ID after an :bro:id:`id_request` event. ## Event for returning the value of an ID after an
## :bro:id:`Control::id_value_request` event.
global id_value_response: event(id: string, val: string); global id_value_response: event(id: string, val: string);
## Requests the current communication status. ## Requests the current communication status.
@ -68,7 +55,8 @@ export {
## Inform the remote Bro instance that its configuration may have been updated. ## Inform the remote Bro instance that its configuration may have been updated.
global configuration_update_request: event(); global configuration_update_request: event();
## This event is a wrapper and alias for the :bro:id:`configuration_update_request` event. ## This event is a wrapper and alias for the
## :bro:id:`Control::configuration_update_request` event.
## This event is also a primary hooking point for the control framework. ## This event is also a primary hooking point for the control framework.
global configuration_update: event(); global configuration_update: event();
## Message in response to a configuration update request. ## Message in response to a configuration update request.

View file

@ -7,14 +7,16 @@ module DPD;
redef signature_files += "base/frameworks/dpd/dpd.sig"; redef signature_files += "base/frameworks/dpd/dpd.sig";
export { export {
## Add the DPD logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## The record type defining the columns to log in the DPD logging stream.
type Info: record { type Info: record {
## Timestamp for when protocol analysis failed. ## Timestamp for when protocol analysis failed.
ts: time &log; ts: time &log;
## Connection unique ID. ## Connection unique ID.
uid: string &log; uid: string &log;
## Connection ID. ## Connection ID containing the 4-tuple which identifies endpoints.
id: conn_id &log; id: conn_id &log;
## Transport protocol for the violation. ## Transport protocol for the violation.
proto: transport_proto &log; proto: transport_proto &log;

View file

@ -11,7 +11,7 @@
# user_name # user_name
# file_name # file_name
# file_md5 # file_md5
# x509_cert - DER encoded, not PEM (ascii armored) # x509_md5
# Example tags: # Example tags:
# infrastructure # infrastructure
@ -25,6 +25,7 @@
module Intel; module Intel;
export { export {
## The intel logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
redef enum Notice::Type += { redef enum Notice::Type += {
@ -33,71 +34,116 @@ export {
Detection, Detection,
}; };
## Record type used for logging information from the intelligence framework.
## Primarily for problems or oddities with inserting and querying data.
## This is important since the content of the intelligence framework can
## change quite dramatically during runtime and problems may be introduced
## into the data.
type Info: record { type Info: record {
## The current network time.
ts: time &log; ts: time &log;
## Represents the severity of the message.
## This value should be one of: "info", "warn", "error" ## This value should be one of: "info", "warn", "error"
level: string &log; level: string &log;
## The message.
message: string &log; message: string &log;
}; };
## Record to represent metadata associated with a single piece of
## intelligence.
type MetaData: record { type MetaData: record {
## A description for the data.
desc: string &optional; desc: string &optional;
## A URL where more information may be found about the intelligence.
url: string &optional; url: string &optional;
## The time at which the data was first declared to be intelligence.
first_seen: time &optional; first_seen: time &optional;
## When this data was most recently inserted into the framework.
latest_seen: time &optional; latest_seen: time &optional;
## Arbitrary text tags for the data.
tags: set[string]; tags: set[string];
}; };
## Record to represent a singular piece of intelligence.
type Item: record { type Item: record {
## If the data is an IP address, this hold the address.
ip: addr &optional; ip: addr &optional;
## If the data is textual, this holds the text.
str: string &optional; str: string &optional;
## If the data is numeric, this holds the number.
num: int &optional; num: int &optional;
## The subtype of the data for when either the $str or $num fields are
## given. If one of those fields is given, this field must be present.
subtype: string &optional; subtype: string &optional;
## The next five fields are temporary until a better model for
## attaching metadata to an intelligence item is created.
desc: string &optional; desc: string &optional;
url: string &optional; url: string &optional;
first_seen: time &optional; first_seen: time &optional;
latest_seen: time &optional; latest_seen: time &optional;
tags: set[string]; tags: set[string];
## These single string tags are throwaway until pybroccoli supports sets ## These single string tags are throwaway until pybroccoli supports sets.
tag1: string &optional; tag1: string &optional;
tag2: string &optional; tag2: string &optional;
tag3: string &optional; tag3: string &optional;
}; };
## Record model used for constructing queries against the intelligence
## framework.
type QueryItem: record { type QueryItem: record {
## If an IP address is being queried for, this field should be given.
ip: addr &optional; ip: addr &optional;
## If a string is being queried for, this field should be given.
str: string &optional; str: string &optional;
## If numeric data is being queried for, this field should be given.
num: int &optional; num: int &optional;
## If either a string or number is being queried for, this field should
## indicate the subtype of the data.
subtype: string &optional; subtype: string &optional;
## A set of tags where if a single metadata record attached to an item
## has any one of the tags defined in this field, it will match.
or_tags: set[string] &optional; or_tags: set[string] &optional;
## A set of tags where a single metadata record attached to an item
## must have all of the tags defined in this field.
and_tags: set[string] &optional; and_tags: set[string] &optional;
## The predicate can be given when searching for a match. It will ## The predicate can be given when searching for a match. It will
## be tested against every :bro:type:`MetaData` item associated with ## be tested against every :bro:type:`Intel::MetaData` item associated
## the data being matched on. If it returns T a single time, the ## with the data being matched on. If it returns T a single time, the
## matcher will consider that the item has matched. ## matcher will consider that the item has matched. This field can
## be used for constructing arbitrarily complex queries that may not
## be possible with the $or_tags or $and_tags fields.
pred: function(meta: Intel::MetaData): bool &optional; pred: function(meta: Intel::MetaData): bool &optional;
}; };
## Function to insert data into the intelligence framework.
##
## item: The data item.
##
## Returns: T if the data was successfully inserted into the framework,
## otherwise it returns F.
global insert: function(item: Item): bool; global insert: function(item: Item): bool;
global insert_event: event(item: Item);
global matcher: function(item: QueryItem): bool;
type MetaDataStore: table[count] of MetaData; ## A wrapper for the :bro:id:`Intel::insert` function. This is primarily
type DataStore: record { ## used as the external API for inserting data into the intelligence
## framework using Broccoli.
global insert_event: event(item: Item);
## Function for matching data within the intelligence framework.
global matcher: function(item: QueryItem): bool;
}
type MetaDataStore: table[count] of MetaData;
type DataStore: record {
ip_data: table[addr] of MetaDataStore; ip_data: table[addr] of MetaDataStore;
## The first string is the actual value and the second string is the subtype. # The first string is the actual value and the second string is the subtype.
string_data: table[string, string] of MetaDataStore; string_data: table[string, string] of MetaDataStore;
int_data: table[int, string] of MetaDataStore; int_data: table[int, string] of MetaDataStore;
}; };
global data_store: DataStore; global data_store: DataStore;
}
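To illustrate the records and functions documented above, a hypothetical
insertion and query (the address, description, and tags are made up) could
look like this:

.. code:: bro

    event bro_init()
        {
        Intel::insert([$ip=203.0.113.5, $desc="known scanner",
                       $tags=set("infrastructure")]);

        if ( Intel::matcher([$ip=203.0.113.5, $or_tags=set("infrastructure")]) )
            print "matched intelligence";
        }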
event bro_init() event bro_init()
{ {

View file

@ -1,16 +1,16 @@
##! The Bro logging interface. ##! The Bro logging interface.
##! ##!
##! See XXX for a introduction to Bro's logging framework. ##! See :doc:`/logging` for an introduction to Bro's logging framework.
module Log; module Log;
# Log::ID and Log::Writer are defined in bro.init due to circular dependencies. # Log::ID and Log::Writer are defined in types.bif due to circular dependencies.
export { export {
## If true, is local logging is by default enabled for all filters. ## If true, local logging is by default enabled for all filters.
const enable_local_logging = T &redef; const enable_local_logging = T &redef;
## If true, is remote logging is by default enabled for all filters. ## If true, remote logging is by default enabled for all filters.
const enable_remote_logging = T &redef; const enable_remote_logging = T &redef;
## Default writer to use if a filter does not specify ## Default writer to use if a filter does not specify
@ -23,21 +23,24 @@ export {
columns: any; columns: any;
## Event that will be raised once for each log entry. ## Event that will be raised once for each log entry.
## The event receives a single same parameter, an instance of type ``columns``. ## The event receives a single parameter, an instance of type
## ``columns``.
ev: any &optional; ev: any &optional;
}; };
## Default function for building the path values for log filters if not ## Builds the default path values for log filters if not otherwise
## speficied otherwise by a filter. The default implementation uses ``id`` ## specified by a filter. The default implementation uses *id*
## to derive a name. ## to derive a name.
## ##
## id: The log stream. ## id: The ID associated with the log stream.
##
## path: A suggested path value, which may be either the filter's ## path: A suggested path value, which may be either the filter's
## ``path`` if defined, else a previous result from the function. ## ``path`` if defined, else a previous result from the function.
## If no ``path`` is defined for the filter, then the first call ## If no ``path`` is defined for the filter, then the first call
## to the function will contain an empty string. ## to the function will contain an empty string.
##
## rec: An instance of the stream's ``columns`` type with its ## rec: An instance of the stream's ``columns`` type with its
## fields set to the values to logged. ## fields set to the values to be logged.
## ##
## Returns: The path to be used for the filter. ## Returns: The path to be used for the filter.
global default_path_func: function(id: ID, path: string, rec: any) : string &redef; global default_path_func: function(id: ID, path: string, rec: any) : string &redef;
@ -46,7 +49,7 @@ export {
## Information passed into rotation callback functions. ## Information passed into rotation callback functions.
type RotationInfo: record { type RotationInfo: record {
writer: Writer; ##< Writer. writer: Writer; ##< The :bro:type:`Log::Writer` being used.
fname: string; ##< Full name of the rotated file. fname: string; ##< Full name of the rotated file.
path: string; ##< Original path value. path: string; ##< Original path value.
open: time; ##< Time when opened. open: time; ##< Time when opened.
@ -57,25 +60,26 @@ export {
## Default rotation interval. Zero disables rotation. ## Default rotation interval. Zero disables rotation.
const default_rotation_interval = 0secs &redef; const default_rotation_interval = 0secs &redef;
## Default naming format for timestamps embedded into filenames. Uses a strftime() style. ## Default naming format for timestamps embedded into filenames.
## Uses a ``strftime()`` style.
const default_rotation_date_format = "%Y-%m-%d-%H-%M-%S" &redef; const default_rotation_date_format = "%Y-%m-%d-%H-%M-%S" &redef;
## Default shell command to run on rotated files. Empty for none. ## Default shell command to run on rotated files. Empty for none.
const default_rotation_postprocessor_cmd = "" &redef; const default_rotation_postprocessor_cmd = "" &redef;
## Specifies the default postprocessor function per writer type. Entries in this ## Specifies the default postprocessor function per writer type.
## table are initialized by each writer type. ## Entries in this table are initialized by each writer type.
const default_rotation_postprocessors: table[Writer] of function(info: RotationInfo) : bool &redef; const default_rotation_postprocessors: table[Writer] of function(info: RotationInfo) : bool &redef;
## Filter customizing logging. ## A filter type describes how to customize logging streams.
type Filter: record { type Filter: record {
## Descriptive name to reference this filter. ## Descriptive name to reference this filter.
name: string; name: string;
## The writer to use. ## The logging writer implementation to use.
writer: Writer &default=default_writer; writer: Writer &default=default_writer;
## Predicate indicating whether a log entry should be recorded. ## Indicates whether a log entry should be recorded.
## If not given, all entries are recorded. ## If not given, all entries are recorded.
## ##
## rec: An instance of the stream's ``columns`` type with its ## rec: An instance of the stream's ``columns`` type with its
@ -101,13 +105,15 @@ export {
## easy to flood the disk by returning a new string for each ## easy to flood the disk by returning a new string for each
## connection ... ## connection ...
## ##
## id: The log stream. ## id: The ID associated with the log stream.
##
## path: A suggested path value, which may be either the filter's ## path: A suggested path value, which may be either the filter's
## ``path`` if defined, else a previous result from the function. ## ``path`` if defined, else a previous result from the function.
## If no ``path`` is defined for the filter, then the first call ## If no ``path`` is defined for the filter, then the first call
## to the function will contain an empty string. ## to the function will contain an empty string.
##
## rec: An instance of the stream's ``columns`` type with its ## rec: An instance of the stream's ``columns`` type with its
## fields set to the values to logged. ## fields set to the values to be logged.
## ##
## Returns: The path to be used for the filter. ## Returns: The path to be used for the filter.
path_func: function(id: ID, path: string, rec: any): string &optional; path_func: function(id: ID, path: string, rec: any): string &optional;
@ -129,27 +135,183 @@ export {
## Rotation interval. ## Rotation interval.
interv: interval &default=default_rotation_interval; interv: interval &default=default_rotation_interval;
## Callback function to trigger for rotated files. If not set, ## Callback function to trigger for rotated files. If not set, the
## the default comes out of default_rotation_postprocessors. ## default comes out of :bro:id:`Log::default_rotation_postprocessors`.
postprocessor: function(info: RotationInfo) : bool &optional; postprocessor: function(info: RotationInfo) : bool &optional;
}; };
## Sentinel value for indicating that a filter was not found when looked up. ## Sentinel value for indicating that a filter was not found when looked up.
const no_filter: Filter = [$name="<not found>"]; # Sentinel. const no_filter: Filter = [$name="<not found>"];
# TODO: Document. ## Creates a new logging stream with the default filter.
##
## id: The ID enum to be associated with the new logging stream.
##
## stream: A record defining the content that the new stream will log.
##
## Returns: True if a new logging stream was successfully created and
## a default filter added to it.
##
## .. bro:see:: Log::add_default_filter Log::remove_default_filter
global create_stream: function(id: ID, stream: Stream) : bool; global create_stream: function(id: ID, stream: Stream) : bool;
## Enables a previously disabled logging stream. Disabled streams
## will not be written to until they are enabled again. New streams
## are enabled by default.
##
## id: The ID associated with the logging stream to enable.
##
## Returns: True if the stream is re-enabled or was not previously disabled.
##
## .. bro:see:: Log::disable_stream
global enable_stream: function(id: ID) : bool; global enable_stream: function(id: ID) : bool;
## Disables a currently enabled logging stream. Disabled streams
## will not be written to until they are enabled again. New streams
## are enabled by default.
##
## id: The ID associated with the logging stream to disable.
##
## Returns: True if the stream is now disabled or was already disabled.
##
## .. bro:see:: Log::enable_stream
global disable_stream: function(id: ID) : bool; global disable_stream: function(id: ID) : bool;
## Adds a custom filter to an existing logging stream. If a filter
## with a matching ``name`` field already exists for the stream, it
## is removed when the new filter is successfully added.
##
## id: The ID associated with the logging stream to filter.
##
## filter: A record describing the desired logging parameters.
##
## Returns: True if the filter was successfully added, false if
## the filter was not added or the *filter* argument was not
## the correct type.
##
## .. bro:see:: Log::remove_filter Log::add_default_filter
## Log::remove_default_filter
global add_filter: function(id: ID, filter: Filter) : bool; global add_filter: function(id: ID, filter: Filter) : bool;
## Removes a filter from an existing logging stream.
##
## id: The ID associated with the logging stream from which to
## remove a filter.
##
## name: A string to match against the ``name`` field of a
## :bro:type:`Log::Filter` for identification purposes.
##
## Returns: True if the logging stream's filter was removed or
## if no filter associated with *name* was found.
##
## .. bro:see:: Log::remove_filter Log::add_default_filter
## Log::remove_default_filter
global remove_filter: function(id: ID, name: string) : bool; global remove_filter: function(id: ID, name: string) : bool;
global get_filter: function(id: ID, name: string) : Filter; # Returns no_filter if not found.
## Gets a filter associated with an existing logging stream.
##
## id: The ID associated with a logging stream from which to
## obtain one of its filters.
##
## name: A string to match against the ``name`` field of a
## :bro:type:`Log::Filter` for identification purposes.
##
## Returns: A filter attached to the logging stream *id* matching
## *name* or, if no matches are found, the
## :bro:id:`Log::no_filter` sentinel value.
##
## .. bro:see:: Log::add_filter Log::remove_filter Log::add_default_filter
## Log::remove_default_filter
global get_filter: function(id: ID, name: string) : Filter;
## Writes a new log line/entry to a logging stream.
##
## id: The ID associated with a logging stream to be written to.
##
## columns: A record value describing the values of each field/column
## to write to the log stream.
##
## Returns: True if the stream was found and no error occurred in writing
## to it or if the stream was disabled and nothing was written.
## False if the stream was not found, or the *columns*
## argument did not match what the stream was initially defined
## to handle, or one of the stream's filters has an invalid
## ``path_func``.
##
## .. bro:see: Log::enable_stream Log::disable_stream
global write: function(id: ID, columns: any) : bool; global write: function(id: ID, columns: any) : bool;
## Sets the buffering status for all the writers of a given logging stream.
## A given writer implementation may or may not support buffering and if it
## doesn't then toggling buffering with this function has no effect.
##
## id: The ID associated with a logging stream for which to
## enable/disable buffering.
##
## buffered: Whether to enable or disable log buffering.
##
## Returns: True if buffering status was set, false if the logging stream
## does not exist.
##
## .. bro:see:: Log::flush
global set_buf: function(id: ID, buffered: bool): bool; global set_buf: function(id: ID, buffered: bool): bool;
## Flushes any currently buffered output for all the writers of a given
## logging stream.
##
## id: The ID associated with a logging stream for which to flush buffered
## data.
##
## Returns: True if all writers of a log stream were signalled to flush
## buffered data or if the logging stream is disabled,
## false if the logging stream does not exist.
##
## .. bro:see:: Log::set_buf Log::enable_stream Log::disable_stream
global flush: function(id: ID): bool; global flush: function(id: ID): bool;
## Adds a default :bro:type:`Log::Filter` record with ``name`` field
## set as "default" to a given logging stream.
##
## id: The ID associated with a logging stream for which to add a default
## filter.
##
## Returns: The status of a call to :bro:id:`Log::add_filter` using a
## default :bro:type:`Log::Filter` argument with ``name`` field
## set to "default".
##
## .. bro:see:: Log::add_filter Log::remove_filter
## Log::remove_default_filter
global add_default_filter: function(id: ID) : bool; global add_default_filter: function(id: ID) : bool;
## Removes the :bro:type:`Log::Filter` with ``name`` field equal to
## "default".
##
## id: The ID associated with a logging stream from which to remove the
## default filter.
##
## Returns: The status of a call to :bro:id:`Log::remove_filter` using
## "default" as the argument.
##
## .. bro:see:: Log::add_filter Log::remove_filter Log::add_default_filter
global remove_default_filter: function(id: ID) : bool; global remove_default_filter: function(id: ID) : bool;
## Runs a command given by :bro:id:`Log::default_rotation_postprocessor_cmd`
## on a rotated file. Meant to be called from postprocessor functions
## that are added to :bro:id:`Log::default_rotation_postprocessors`.
##
## info: A record holding meta-information about the log being rotated.
##
## npath: The new path of the file (after already being rotated/processed
## by the writer-specific postprocessor as defined in
## :bro:id:`Log::default_rotation_postprocessors`).
##
## Returns: True when :bro:id:`Log::default_rotation_postprocessor_cmd`
## is empty or the system command given by it has been invoked
## to postprocess a rotated log file.
##
## .. bro:see:: Log::default_rotation_date_format
## Log::default_rotation_postprocessor_cmd
## Log::default_rotation_postprocessors
global run_rotation_postprocessor_cmd: function(info: RotationInfo, npath: string) : bool; global run_rotation_postprocessor_cmd: function(info: RotationInfo, npath: string) : bool;
} }
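A minimal sketch tying the documented functions together; the module name,
stream ID, and record fields below are hypothetical:

.. code:: bro

    module Foo;

    export {
        redef enum Log::ID += { LOG };

        type Info: record {
            ts:  time   &log;
            msg: string &log;
        };
    }

    event bro_init()
        {
        Log::create_stream(Foo::LOG, [$columns=Info]);

        local rec: Info = [$ts=network_time(), $msg="hello"];
        Log::write(Foo::LOG, rec);
        }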

View file

@ -1,29 +1,51 @@
##! This script defines a postprocessing function that can be applied ##! This script defines a postprocessing function that can be applied
##! to a logging filter in order to automatically SCP (secure copy) ##! to a logging filter in order to automatically SCP (secure copy)
##! a log stream (or a subset of it) to a remote host at configurable ##! a log stream (or a subset of it) to a remote host at configurable
##! rotation time intervals. ##! rotation time intervals. Generally, to use this functionality
##! you must handle the :bro:id:`bro_init` event and do the following
##! in your handler, as sketched after the list below:
##!
##! 1) Create a new :bro:type:`Log::Filter` record that defines a name/path,
##! rotation interval, and set the ``postprocessor`` to
##! :bro:id:`Log::scp_postprocessor`.
##! 2) Add the filter to a logging stream using :bro:id:`Log::add_filter`.
##! 3) Add a table entry to :bro:id:`Log::scp_destinations` for the filter's
##! writer/path pair which defines a set of :bro:type:`Log::SCPDestination`
##! records.
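Putting the three steps together, a hypothetical handler (the log stream,
filter name, destination host, user, and path are made up) might look like:

.. code:: bro

    event bro_init()
        {
        local f: Log::Filter = [$name="conn-scp", $path="conn-scp", $interv=1hr,
                                $postprocessor=Log::scp_postprocessor];
        Log::add_filter(Conn::LOG, f);

        local d: Log::SCPDestination = [$user="bro", $host="archive.example.com",
                                        $path="/data/logs"];
        Log::scp_destinations[Log::WRITER_ASCII, "conn-scp"] = set(d);
        }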
module Log; module Log;
export { export {
## This postprocessor SCP's the rotated-log to all the remote hosts ## Secure-copies the rotated-log to all the remote hosts
## defined in :bro:id:`Log::scp_destinations` and then deletes ## defined in :bro:id:`Log::scp_destinations` and then deletes
## the local copy of the rotated-log. It's not active when ## the local copy of the rotated-log. It's not active when
## reading from trace files. ## reading from trace files.
##
## info: A record holding meta-information about the log file to be
## postprocessed.
##
## Returns: True if the secure-copy system command was initiated or
## if no destination was configured for the log as described
## by *info*.
global scp_postprocessor: function(info: Log::RotationInfo): bool; global scp_postprocessor: function(info: Log::RotationInfo): bool;
## A container that describes the remote destination for the SCP command ## A container that describes the remote destination for the SCP command
## argument as ``user@host:path``. ## argument as ``user@host:path``.
type SCPDestination: record { type SCPDestination: record {
## The remote user to log in as. A trust mechanism should be
## pre-established.
user: string; user: string;
## The remote host to which to transfer logs.
host: string; host: string;
## The path/directory on the remote host to send logs.
path: string; path: string;
}; };
## A table indexed by a particular log writer and filter path, that yields ## A table indexed by a particular log writer and filter path, that yields
## a set of remote destinations. The :bro:id:`Log::scp_postprocessor` ## a set of remote destinations. The :bro:id:`Log::scp_postprocessor`
## function queries this table upon log rotation and performs a secure ## function queries this table upon log rotation and performs a secure
## copy of the rotated-log to each destination in the set. ## copy of the rotated-log to each destination in the set. This
## table can be modified at run-time.
global scp_destinations: table[Writer, string] of set[SCPDestination]; global scp_destinations: table[Writer, string] of set[SCPDestination];
## Default naming format for timestamps embedded into log filenames ## Default naming format for timestamps embedded into log filenames

View file

@ -1,4 +1,5 @@
##! Interface for the ascii log writer. ##! Interface for the ASCII log writer. Redefinable options are available
##! to tweak the output format of ASCII logs.
module LogAscii; module LogAscii;
@ -7,7 +8,8 @@ export {
## into files. This is primarily for debugging purposes. ## into files. This is primarily for debugging purposes.
const output_to_stdout = F &redef; const output_to_stdout = F &redef;
## If true, include a header line with column names. ## If true, include a header line with column names and a description
## of the other ASCII logging options that were used.
const include_header = T &redef; const include_header = T &redef;
## Prefix for the header line if included. ## Prefix for the header line if included.

View file

@ -13,11 +13,11 @@
module Metrics; module Metrics;
export { export {
## This value allows a user to decide how large of result groups the ## Allows a user to decide how large the result groups should be in which
## workers should transmit values. ## workers transmit values for cluster metric aggregation.
const cluster_send_in_groups_of = 50 &redef; const cluster_send_in_groups_of = 50 &redef;
## This is the percent of the full threshold value that needs to be met ## The percent of the full threshold value that needs to be met
## on a single worker for that worker to send the value to its manager in ## on a single worker for that worker to send the value to its manager in
## order for it to request a global view for that value. There is no ## order for it to request a global view for that value. There is no
## requirement that the manager requests a global view for the index ## requirement that the manager requests a global view for the index
@ -25,11 +25,11 @@ export {
## recently. ## recently.
const cluster_request_global_view_percent = 0.1 &redef; const cluster_request_global_view_percent = 0.1 &redef;
## This event is sent by the manager in a cluster to initiate the ## Event sent by the manager in a cluster to initiate the
## collection of metrics values for a filter. ## collection of metrics values for a filter.
global cluster_filter_request: event(uid: string, id: ID, filter_name: string); global cluster_filter_request: event(uid: string, id: ID, filter_name: string);
## This event is sent by nodes that are collecting metrics after receiving ## Event sent by nodes that are collecting metrics after receiving
## a request for the metric filter from the manager. ## a request for the metric filter from the manager.
global cluster_filter_response: event(uid: string, id: ID, filter_name: string, data: MetricTable, done: bool); global cluster_filter_response: event(uid: string, id: ID, filter_name: string, data: MetricTable, done: bool);
@ -40,12 +40,12 @@ export {
global cluster_index_request: event(uid: string, id: ID, filter_name: string, index: Index); global cluster_index_request: event(uid: string, id: ID, filter_name: string, index: Index);
## This event is sent by nodes in response to a ## This event is sent by nodes in response to a
## :bro:id:`cluster_index_request` event. ## :bro:id:`Metrics::cluster_index_request` event.
global cluster_index_response: event(uid: string, id: ID, filter_name: string, index: Index, val: count); global cluster_index_response: event(uid: string, id: ID, filter_name: string, index: Index, val: count);
## This is sent by workers to indicate that they crossed the percent of the ## This is sent by workers to indicate that they crossed the percent of the
## current threshold by the percentage defined globally in ## current threshold by the percentage defined globally in
## :bro:id:`cluster_request_global_view_percent` ## :bro:id:`Metrics::cluster_request_global_view_percent`
global cluster_index_intermediate_response: event(id: Metrics::ID, filter_name: string, index: Metrics::Index, val: count); global cluster_index_intermediate_response: event(id: Metrics::ID, filter_name: string, index: Metrics::Index, val: count);
## This event is scheduled internally on workers to send result chunks. ## This event is scheduled internally on workers to send result chunks.

View file

@ -1,13 +1,16 @@
##! This is the implementation of the metrics framework. ##! The metrics framework provides a way to count and measure data.
@load base/frameworks/notice @load base/frameworks/notice
module Metrics; module Metrics;
export { export {
## The metrics logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## Identifiers for metrics to collect.
type ID: enum { type ID: enum {
## Blank placeholder value.
NOTHING, NOTHING,
}; };
@ -15,10 +18,13 @@ export {
## current value to the logging stream. ## current value to the logging stream.
const default_break_interval = 15mins &redef; const default_break_interval = 15mins &redef;
## This is the interval for how often notices will happen after they have ## This is the interval for how often threshold based notices will happen
## already fired. ## after they have already fired.
const renotice_interval = 1hr &redef; const renotice_interval = 1hr &redef;
## Represents an entity for which metrics are being collected. An instance
## of this record type and a :bro:type:`Metrics::ID` together represent a
## single measurement.
type Index: record { type Index: record {
## Host is the value to which this metric applies. ## Host is the value to which this metric applies.
host: addr &optional; host: addr &optional;
@ -37,17 +43,30 @@ export {
network: subnet &optional; network: subnet &optional;
} &log; } &log;
## The record type that is used for logging metrics.
type Info: record { type Info: record {
## Timestamp at which the metric was "broken".
ts: time &log; ts: time &log;
## What measurement the metric represents.
metric_id: ID &log; metric_id: ID &log;
## The name of the filter being logged. :bro:type:`Metrics::ID` values
## can have multiple filters which represent different perspectives on
## the data so this is necessary to understand the value.
filter_name: string &log; filter_name: string &log;
## What the metric value applies to.
index: Index &log; index: Index &log;
## The simple numeric value of the metric.
value: count &log; value: count &log;
}; };
# TODO: configure a metrics filter logging stream to log the current # TODO: configure a metrics filter logging stream to log the current
# metrics configuration in case someone is looking through # metrics configuration in case someone is looking through
# old logs and the configuration has changed since then. # old logs and the configuration has changed since then.
## Filters define how the data from a metric is aggregated and handled.
## Filters can be used to set how often the measurements are cut or "broken"
## and logged or how the data within them is aggregated. It's also
## possible to disable logging and use filters for thresholding.
type Filter: record { type Filter: record {
## The :bro:type:`Metrics::ID` that this filter applies to. ## The :bro:type:`Metrics::ID` that this filter applies to.
id: ID &optional; id: ID &optional;
@ -62,7 +81,7 @@ export {
aggregation_mask: count &optional; aggregation_mask: count &optional;
## This is essentially a mapping table between addresses and subnets. ## This is essentially a mapping table between addresses and subnets.
aggregation_table: table[subnet] of subnet &optional; aggregation_table: table[subnet] of subnet &optional;
## The interval at which the metric should be "broken" and written ## The interval at which this filter should be "broken" and written
## to the logging stream. The counters are also reset to zero at ## to the logging stream. The counters are also reset to zero at
## this time so any threshold based detection needs to be set to a ## this time so any threshold based detection needs to be set to a
## number that should be expected to happen within this period. ## number that should be expected to happen within this period.
@ -79,7 +98,7 @@ export {
notice_threshold: count &optional; notice_threshold: count &optional;
## A series of thresholds at which to generate notices. ## A series of thresholds at which to generate notices.
notice_thresholds: vector of count &optional; notice_thresholds: vector of count &optional;
## How often this notice should be raised for this metric index. It ## How often this notice should be raised for this filter. It
## will be generated every time it crosses a threshold, but if the ## will be generated every time it crosses a threshold, but if the
## $break_interval is set to 5mins and this is set to 1hr the notice ## $break_interval is set to 5mins and this is set to 1hr the notice
## will only be generated once per hour even if something crosses the ## will only be generated once per hour even if something crosses the
@ -87,15 +106,43 @@ export {
notice_freq: interval &optional; notice_freq: interval &optional;
}; };
## Function to associate a metric filter with a metric ID.
##
## id: The metric ID that the filter should be associated with.
##
## filter: The record representing the filter configuration.
global add_filter: function(id: ID, filter: Filter); global add_filter: function(id: ID, filter: Filter);
## Add data into a :bro:type:`Metrics::ID`. This should be called when
## a script has measured some point value and is ready to increment the
## counters.
##
## id: The metric ID that the data represents.
##
## index: The metric index that the value is to be added to.
##
## increment: How much to increment the counter by.
global add_data: function(id: ID, index: Index, increment: count); global add_data: function(id: ID, index: Index, increment: count);
## Helper function to represent a :bro:type:`Metrics::Index` value as
## a simple string.
##
## index: The metric index that is to be converted into a string.
##
## Returns: A string representation of the metric index.
global index2str: function(index: Index): string; global index2str: function(index: Index): string;
# This is the event that is used to "finish" metrics and adapt the metrics ## Event that is used to "finish" metrics and adapt the metrics
# framework for clustered or non-clustered usage. ## framework for clustered or non-clustered usage.
##
## .. note:: This is primarily intended for internal use.
global log_it: event(filter: Filter); global log_it: event(filter: Filter);
## Event to access metrics records as they are passed to the logging framework.
global log_metrics: event(rec: Info); global log_metrics: event(rec: Info);
## Type to store a table of metrics values. Internal use only!
type MetricTable: table[Index] of count &default=0;
} }
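As an illustration of the documented API, a hypothetical metric counting
established connections per originator host might be set up as follows (the
enum value and the break interval are made up):

.. code:: bro

    redef enum Metrics::ID += { CONNS_ESTABLISHED };

    event bro_init()
        {
        Metrics::add_filter(CONNS_ESTABLISHED, [$break_interval=5mins]);
        }

    event connection_established(c: connection)
        {
        Metrics::add_data(CONNS_ESTABLISHED, [$host=c$id$orig_h], 1);
        }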
redef record Notice::Info += { redef record Notice::Info += {
@ -105,7 +152,6 @@ redef record Notice::Info += {
global metric_filters: table[ID] of vector of Filter = table(); global metric_filters: table[ID] of vector of Filter = table();
global filter_store: table[ID, string] of Filter = table(); global filter_store: table[ID, string] of Filter = table();
type MetricTable: table[Index] of count &default=0;
# This is indexed by metric ID and stream filter name. # This is indexed by metric ID and stream filter name.
global store: table[ID, string] of MetricTable = table() &default=table(); global store: table[ID, string] of MetricTable = table() &default=table();

View file

@ -1,3 +1,8 @@
##! Adds a new notice action type which can be used to email notices
##! to the administrators of a particular address space as set by
##! :bro:id:`Site::local_admins` if the notice contains a source
##! or destination address that lies within their space.
@load ../main @load ../main
@load base/utils/site @load base/utils/site
@ -6,8 +11,8 @@ module Notice;
export { export {
redef enum Action += { redef enum Action += {
## Indicate that the generated email should be addressed to the ## Indicate that the generated email should be addressed to the
## appropriate email addresses as found in the ## appropriate email addresses as found by the
## :bro:id:`Site::addr_to_emails` variable based on the relevant ## :bro:id:`Site::get_emails` function based on the relevant
## address or addresses indicated in the notice. ## address or addresses indicated in the notice.
ACTION_EMAIL_ADMIN ACTION_EMAIL_ADMIN
}; };

View file

@ -1,3 +1,5 @@
##! Allows configuration of a pager email address to which notices can be sent.
@load ../main @load ../main
module Notice; module Notice;
@ -5,7 +7,7 @@ module Notice;
export { export {
redef enum Action += { redef enum Action += {
## Indicates that the notice should be sent to the pager email address ## Indicates that the notice should be sent to the pager email address
## configured in the :bro:id:`mail_page_dest` variable. ## configured in the :bro:id:`Notice::mail_page_dest` variable.
ACTION_PAGE ACTION_PAGE
}; };

View file

@ -1,6 +1,6 @@
#! Notice extension that mails out a pretty-printed version of alarm.log ##! Notice extension that mails out a pretty-printed version of alarm.log
#! in regular intervals, formatted for better human readability. If activated, ##! in regular intervals, formatted for better human readability. If activated,
#! that replaces the default summary mail having the raw log output. ##! that replaces the default summary mail having the raw log output.
@load base/frameworks/cluster @load base/frameworks/cluster
@load ../main @load ../main
@ -14,9 +14,8 @@ export {
## Address to send the pretty-printed reports to. Default if not set is ## Address to send the pretty-printed reports to. Default if not set is
## :bro:id:`Notice::mail_dest`. ## :bro:id:`Notice::mail_dest`.
const mail_dest_pretty_printed = "" &redef; const mail_dest_pretty_printed = "" &redef;
## If an address from one of these networks is reported, we mark ## If an address from one of these networks is reported, we mark
## the entry with an addition quote symbol (i.e., ">"). Many MUAs ## the entry with an additional quote symbol (i.e., ">"). Many MUAs
## then highlight such lines differently. ## then highlight such lines differently.
global flag_nets: set[subnet] &redef; global flag_nets: set[subnet] &redef;

View file

@ -1,4 +1,6 @@
##! Implements notice functionality across clusters. ##! Implements notice functionality across clusters. Worker nodes
##! will disable notice/alarm logging streams and forward notice
##! events to the manager node for logging/processing.
@load ./main @load ./main
@load base/frameworks/cluster @load base/frameworks/cluster
@ -7,10 +9,15 @@ module Notice;
export { export {
## This is the event used to transport notices on the cluster. ## This is the event used to transport notices on the cluster.
##
## n: The notice information to be sent to the cluster manager for
## further processing.
global cluster_notice: event(n: Notice::Info); global cluster_notice: event(n: Notice::Info);
} }
## Manager can communicate notice suppression to workers.
redef Cluster::manager2worker_events += /Notice::begin_suppression/; redef Cluster::manager2worker_events += /Notice::begin_suppression/;
## Workers need the ability to forward notices to the manager.
redef Cluster::worker2manager_events += /Notice::cluster_notice/; redef Cluster::worker2manager_events += /Notice::cluster_notice/;
@if ( Cluster::local_node_type() != Cluster::MANAGER ) @if ( Cluster::local_node_type() != Cluster::MANAGER )

View file

@ -1,3 +1,8 @@
##! Loading this script extends the :bro:enum:`Notice::ACTION_EMAIL` action
##! by appending to the email the hostnames associated with
##! :bro:type:`Notice::Info`'s *src* and *dst* fields as determined by a
##! DNS lookup.
@load ../main @load ../main
module Notice; module Notice;

View file

@ -2,8 +2,7 @@
##! are odd or potentially bad. Decisions of the meaning of various notices ##! are odd or potentially bad. Decisions of the meaning of various notices
##! need to be done per site because Bro does not ship with assumptions about ##! need to be done per site because Bro does not ship with assumptions about
##! what is bad activity for sites. More extensive documentation about using ##! what is bad activity for sites. More extensive documentation about using
##! the notice framework can be found in the documentation section of the ##! the notice framework can be found in :doc:`/notice`.
##! http://www.bro-ids.org/ website.
module Notice; module Notice;
@ -21,10 +20,10 @@ export {
## Scripts creating new notices need to redef this enum to add their own ## Scripts creating new notices need to redef this enum to add their own
## specific notice types which would then get used when they call the ## specific notice types which would then get used when they call the
## :bro:id:`NOTICE` function. The convention is to give a general category ## :bro:id:`NOTICE` function. The convention is to give a general category
## along with the specific notice separating words with underscores and using ## along with the specific notice separating words with underscores and
## leading capitals on each word except for abbreviations which are kept in ## using leading capitals on each word except for abbreviations which are
## all capitals. For example, SSH::Login is for heuristically guessed ## kept in all capitals. For example, SSH::Login is for heuristically
## successful SSH logins. ## guessed successful SSH logins.
type Type: enum { type Type: enum {
## Notice reporting a count of how often a notice occurred. ## Notice reporting a count of how often a notice occurred.
Tally, Tally,
@ -49,26 +48,37 @@ export {
}; };
## The notice framework is able to do automatic notice suppression by ## The notice framework is able to do automatic notice suppression by
## utilizing the $identifier field in :bro:type:`Info` records. ## utilizing the $identifier field in :bro:type:`Notice::Info` records.
## Set this to "0secs" to completely disable automated notice suppression. ## Set this to "0secs" to completely disable automated notice suppression.
const default_suppression_interval = 1hrs &redef; const default_suppression_interval = 1hrs &redef;
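A minimal sketch of how the $identifier field drives suppression; Example_Notice is a hypothetical notice type and the handler is only illustrative. Repeats of the same note/identifier pair within default_suppression_interval are dropped:

redef enum Notice::Type += { Example_Notice };

event connection_established(c: connection)
    {
    NOTICE([$note=Example_Notice,
            $msg="example notice for illustration",
            $conn=c,
            # Same identifier => suppressed for default_suppression_interval.
            $identifier=cat(c$id$orig_h, c$id$resp_h)]);
    }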
type Info: record { type Info: record {
## An absolute time indicating when the notice occurred, defaults
## to the current network time.
ts: time &log &optional; ts: time &log &optional;
## A connection UID which uniquely identifies the endpoints
## concerned with the notice.
uid: string &log &optional; uid: string &log &optional;
## A connection 4-tuple identifying the endpoints concerned with the
## notice.
id: conn_id &log &optional; id: conn_id &log &optional;
## These are shorthand ways of giving the uid and id to a notice. The ## A shorthand way of giving the uid and id to a notice. The
## reference to the actual connection will be deleted after applying ## reference to the actual connection will be deleted after applying
## the notice policy. ## the notice policy.
conn: connection &optional; conn: connection &optional;
## A shorthand way of giving the uid and id to a notice. The
## reference to the actual connection will be deleted after applying
## the notice policy.
iconn: icmp_conn &optional; iconn: icmp_conn &optional;
## The transport protocol. Filled automatically when either conn, iconn ## The transport protocol. Filled automatically when either conn, iconn
## or p is specified. ## or p is specified.
proto: transport_proto &log &optional; proto: transport_proto &log &optional;
## The :bro:enum:`Notice::Type` of the notice. ## The :bro:type:`Notice::Type` of the notice.
note: Type &log; note: Type &log;
## The human readable message for the notice. ## The human readable message for the notice.
msg: string &log &optional; msg: string &log &optional;
@ -105,7 +115,7 @@ export {
## Adding a string "token" to this set will cause the notice framework's ## Adding a string "token" to this set will cause the notice framework's
## built-in emailing functionality to delay sending the email until ## built-in emailing functionality to delay sending the email until
## either the token has been removed or the email has been delayed ## either the token has been removed or the email has been delayed
## for :bro:id:`max_email_delay`. ## for :bro:id:`Notice::max_email_delay`.
email_delay_tokens: set[string] &optional; email_delay_tokens: set[string] &optional;
## This field is to be provided when a notice is generated for the ## This field is to be provided when a notice is generated for the
@ -151,8 +161,9 @@ export {
## This is the record that defines the items that make up the notice policy. ## This is the record that defines the items that make up the notice policy.
type PolicyItem: record { type PolicyItem: record {
## This is the exact positional order in which the :bro:type:`PolicyItem` ## This is the exact positional order in which the
## records are checked. This is set internally by the notice framework. ## :bro:type:`Notice::PolicyItem` records are checked.
## This is set internally by the notice framework.
position: count &log &optional; position: count &log &optional;
## Define the priority for this check. Items are checked in ordered ## Define the priority for this check. Items are checked in ordered
## from highest value (10) to lowest value (0). ## from highest value (10) to lowest value (0).
@ -173,8 +184,8 @@ export {
suppress_for: interval &log &optional; suppress_for: interval &log &optional;
}; };
## This is the where the :bro:id:`Notice::policy` is defined. All notice ## Defines a notice policy that is extensible on a per-site basis.
## processing is done through this variable. ## All notice processing is done through this variable.
const policy: set[PolicyItem] = { const policy: set[PolicyItem] = {
[$pred(n: Notice::Info) = { return (n$note in Notice::ignored_types); }, [$pred(n: Notice::Info) = { return (n$note in Notice::ignored_types); },
$halt=T, $priority = 9], $halt=T, $priority = 9],
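A hedged sketch of how a site might extend this policy, assuming the set is &redef as site policies normally require; Notice::Tally is used only because it is a type defined above:

redef Notice::policy += {
    [$pred(n: Notice::Info) = { return n$note == Notice::Tally; },
     $action = Notice::ACTION_EMAIL,
     $priority = 5]
};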
@ -203,8 +214,9 @@ export {
## Local system sendmail program. ## Local system sendmail program.
const sendmail = "/usr/sbin/sendmail" &redef; const sendmail = "/usr/sbin/sendmail" &redef;
## Email address to send notices with the :bro:enum:`ACTION_EMAIL` action ## Email address to send notices with the :bro:enum:`Notice::ACTION_EMAIL`
## or to send bulk alarm logs on rotation with :bro:enum:`ACTION_ALARM`. ## action or to send bulk alarm logs on rotation with
## :bro:enum:`Notice::ACTION_ALARM`.
const mail_dest = "" &redef; const mail_dest = "" &redef;
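Typical site configuration is just a pair of redefs; the address below is a placeholder:

redef Notice::sendmail = "/usr/sbin/sendmail";
redef Notice::mail_dest = "security@example.com";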
## Address that emails will be from. ## Address that emails will be from.
@ -219,14 +231,20 @@ export {
## A log postprocessing function that implements emailing the contents ## A log postprocessing function that implements emailing the contents
## of a log upon rotation to any configured :bro:id:`Notice::mail_dest`. ## of a log upon rotation to any configured :bro:id:`Notice::mail_dest`.
## The rotated log is removed upon being sent. ## The rotated log is removed upon being sent.
##
## info: A record containing the rotated log file information.
##
## Returns: True.
global log_mailing_postprocessor: function(info: Log::RotationInfo): bool; global log_mailing_postprocessor: function(info: Log::RotationInfo): bool;
## This is the event that is called as the entry point to the ## This is the event that is called as the entry point to the
## notice framework by the global :bro:id:`NOTICE` function. By the time ## notice framework by the global :bro:id:`NOTICE` function. By the time
## this event is generated, default values have already been filled out in ## this event is generated, default values have already been filled out in
## the :bro:type:`Notice::Info` record and synchronous functions in the ## the :bro:type:`Notice::Info` record and synchronous functions in the
## :bro:id:`Notice:sync_functions` have already been called. The notice ## :bro:id:`Notice::sync_functions` have already been called. The notice
## policy has also been applied. ## policy has also been applied.
##
## n: The record containing notice data.
global notice: event(n: Info); global notice: event(n: Info);
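As a sketch, a handler for this event sees every notice after the policy has been applied; the body here is purely illustrative:

event Notice::notice(n: Notice::Info)
    {
    if ( n?$id )
        print fmt("notice %s involving %s", n$note, n$id$orig_h);
    }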
## This is a set of functions that provide a synchronous way for scripts ## This is a set of functions that provide a synchronous way for scripts
@ -243,30 +261,55 @@ export {
const sync_functions: set[function(n: Notice::Info)] = set() &redef; const sync_functions: set[function(n: Notice::Info)] = set() &redef;
## This event is generated when a notice begins to be suppressed. ## This event is generated when a notice begins to be suppressed.
##
## n: The record containing notice data regarding the notice type
## about to be suppressed.
global begin_suppression: event(n: Notice::Info); global begin_suppression: event(n: Notice::Info);
## This event is generated on each occurrence of an event being suppressed. ## This event is generated on each occurrence of an event being suppressed.
##
## n: The record containing notice data regarding the notice type
## being suppressed.
global suppressed: event(n: Notice::Info); global suppressed: event(n: Notice::Info);
## This event is generated when a notice stops being suppressed. ## This event is generated when a notice stops being suppressed.
##
## n: The record containing notice data regarding the notice type
## that was being suppressed.
global end_suppression: event(n: Notice::Info); global end_suppression: event(n: Notice::Info);
## Call this function to send a notice in an email. It is already used ## Call this function to send a notice in an email. It is already used
## by default with the built in :bro:enum:`ACTION_EMAIL` and ## by default with the built in :bro:enum:`Notice::ACTION_EMAIL` and
## :bro:enum:`ACTION_PAGE` actions. ## :bro:enum:`Notice::ACTION_PAGE` actions.
##
## n: The record of notice data to email.
##
## dest: The intended recipient of the notice email.
##
## extend: Whether to extend the email using the ``email_body_sections``
## field of *n*.
global email_notice_to: function(n: Info, dest: string, extend: bool); global email_notice_to: function(n: Info, dest: string, extend: bool);
## Constructs mail headers to which an email body can be appended for ## Constructs mail headers to which an email body can be appended for
## sending with sendmail. ## sending with sendmail.
##
## subject_desc: a subject string to use for the mail ## subject_desc: a subject string to use for the mail
##
## dest: recipient string to use for the mail ## dest: recipient string to use for the mail
##
## Returns: a string of mail headers to which an email body can be appended ## Returns: a string of mail headers to which an email body can be appended
global email_headers: function(subject_desc: string, dest: string): string; global email_headers: function(subject_desc: string, dest: string): string;
## This event can be handled to access the :bro:type:`Info` ## This event can be handled to access the :bro:type:`Notice::Info`
## record as it is sent on to the logging framework. ## record as it is sent on to the logging framework.
##
## rec: The record containing notice data before it is logged.
global log_notice: event(rec: Info); global log_notice: event(rec: Info);
## This is an internal wrapper for the global NOTICE function. Please ## This is an internal wrapper for the global :bro:id:`NOTICE` function;
## disregard. ## disregard.
##
## n: The record of notice data.
global internal_NOTICE: function(n: Notice::Info); global internal_NOTICE: function(n: Notice::Info);
} }
@ -447,7 +490,8 @@ event notice(n: Notice::Info) &priority=-5
} }
## This determines if a notice is being suppressed. It is only used ## This determines if a notice is being suppressed. It is only used
## internally as part of the mechanics for the global NOTICE function. ## internally as part of the mechanics for the global :bro:id:`NOTICE`
## function.
function is_being_suppressed(n: Notice::Info): bool function is_being_suppressed(n: Notice::Info): bool
{ {
if ( n?$identifier && [n$note, n$identifier] in suppressing ) if ( n?$identifier && [n$note, n$identifier] in suppressing )

View file

@ -1,3 +1,12 @@
##! This script provides a default set of actions to take for "weird activity"
##! events generated from Bro's event engine. Weird activity is defined as
##! unusual or exceptional activity that can indicate malformed connections,
##! traffic that doesn't conform to a particular protocol, malfunctioning
##! or misconfigured hardware, or even an attacker attempting to avoid/confuse
##! a sensor. Without context, it's hard to judge whether a particular
##! category of weird activity is interesting, but this script provides
##! a starting point for the user.
@load base/utils/conn-ids @load base/utils/conn-ids
@load base/utils/site @load base/utils/site
@load ./main @load ./main
@ -5,6 +14,7 @@
module Weird; module Weird;
export { export {
## The weird logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
redef enum Notice::Type += { redef enum Notice::Type += {
@ -12,6 +22,7 @@ export {
Activity, Activity,
}; };
## The record type which contains the column fields of the weird log.
type Info: record { type Info: record {
## The time when the weird occurred. ## The time when the weird occurred.
ts: time &log; ts: time &log;
@ -32,19 +43,32 @@ export {
peer: string &log &optional; peer: string &log &optional;
}; };
## Types of actions that may be taken when handling weird activity events.
type Action: enum { type Action: enum {
## A dummy action indicating the user does not care what internal
## decision is made regarding a given type of weird.
ACTION_UNSPECIFIED, ACTION_UNSPECIFIED,
## No action is to be taken.
ACTION_IGNORE, ACTION_IGNORE,
## Log the weird event every time it occurs.
ACTION_LOG, ACTION_LOG,
## Log the weird event only once.
ACTION_LOG_ONCE, ACTION_LOG_ONCE,
## Log the weird event once per connection.
ACTION_LOG_PER_CONN, ACTION_LOG_PER_CONN,
## Log the weird event once per originator host.
ACTION_LOG_PER_ORIG, ACTION_LOG_PER_ORIG,
## Always generate a notice associated with the weird event.
ACTION_NOTICE, ACTION_NOTICE,
## Generate a notice associated with the weird event only once.
ACTION_NOTICE_ONCE, ACTION_NOTICE_ONCE,
## Generate a notice for the weird event once per connection.
ACTION_NOTICE_PER_CONN, ACTION_NOTICE_PER_CONN,
## Generate a notice for the weird event once per originator host.
ACTION_NOTICE_PER_ORIG, ACTION_NOTICE_PER_ORIG,
}; };
## A table specifying default/recommended actions per weird type.
const actions: table[string] of Action = { const actions: table[string] of Action = {
["unsolicited_SYN_response"] = ACTION_IGNORE, ["unsolicited_SYN_response"] = ACTION_IGNORE,
["above_hole_data_without_any_acks"] = ACTION_LOG, ["above_hole_data_without_any_acks"] = ACTION_LOG,
@ -201,7 +225,7 @@ export {
["fragment_overlap"] = ACTION_LOG_PER_ORIG, ["fragment_overlap"] = ACTION_LOG_PER_ORIG,
["fragment_protocol_inconsistency"] = ACTION_LOG, ["fragment_protocol_inconsistency"] = ACTION_LOG,
["fragment_size_inconsistency"] = ACTION_LOG_PER_ORIG, ["fragment_size_inconsistency"] = ACTION_LOG_PER_ORIG,
## These do indeed happen! # These do indeed happen!
["fragment_with_DF"] = ACTION_LOG, ["fragment_with_DF"] = ACTION_LOG,
["incompletely_captured_fragment"] = ACTION_LOG, ["incompletely_captured_fragment"] = ACTION_LOG,
["bad_IP_checksum"] = ACTION_LOG_PER_ORIG, ["bad_IP_checksum"] = ACTION_LOG_PER_ORIG,
@ -215,8 +239,8 @@ export {
## and weird name into this set. ## and weird name into this set.
const ignore_hosts: set[addr, string] &redef; const ignore_hosts: set[addr, string] &redef;
# But don't ignore these (for the weird file), it's handy keeping ## Don't ignore repeats for weirds in this set. For example,
# track of clustered checksum errors. ## it's handy keeping track of clustered checksum errors.
const weird_do_not_ignore_repeats = { const weird_do_not_ignore_repeats = {
"bad_IP_checksum", "bad_TCP_checksum", "bad_UDP_checksum", "bad_IP_checksum", "bad_TCP_checksum", "bad_UDP_checksum",
"bad_ICMP_checksum", "bad_ICMP_checksum",
@ -237,6 +261,10 @@ export {
## duplicate notices from being raised. ## duplicate notices from being raised.
global did_notice: set[string, string] &create_expire=1day &redef; global did_notice: set[string, string] &create_expire=1day &redef;
## Handlers of this event are invoked one per write to the weird
## logging stream before the data is actually written.
##
## rec: The weird columns about to be logged to the weird stream.
global log_weird: event(rec: Info); global log_weird: event(rec: Info);
} }
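A small sketch of tuning the actions table defined above from a site script, assuming the table is &redef like the other tunables in this framework; the chosen weird name and action are arbitrary:

redef Weird::actions += {
    ["bad_TCP_checksum"] = Weird::ACTION_LOG_PER_ORIG,
};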

View file

@ -9,17 +9,22 @@
module PacketFilter; module PacketFilter;
export { export {
## Add the packet filter logging stream.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## Add notice types related to packet filter errors.
redef enum Notice::Type += { redef enum Notice::Type += {
## This notice is generated if a packet filter is unable to be compiled. ## This notice is generated if a packet filter is unable to be compiled.
Compile_Failure, Compile_Failure,
## This notice is generated if a packet filter is unable to be installed. ## This notice is generated if a packet filter fails to install.
Install_Failure, Install_Failure,
}; };
## The record type defining columns to be logged in the packet filter
## logging stream.
type Info: record { type Info: record {
## The time at which the packet filter installation attempt was made.
ts: time &log; ts: time &log;
## This is a string representation of the node that applied this ## This is a string representation of the node that applied this
@ -40,7 +45,7 @@ export {
## By default, Bro will examine all packets. If this is set to false, ## By default, Bro will examine all packets. If this is set to false,
## it will dynamically build a BPF filter that only select protocols ## it will dynamically build a BPF filter that only select protocols
## for which the user has loaded a corresponding analysis script. ## for which the user has loaded a corresponding analysis script.
## The latter used to be default for Bro versions < 1.6. That has now ## The latter used to be default for Bro versions < 2.0. That has now
## changed however to enable port-independent protocol analysis. ## changed however to enable port-independent protocol analysis.
const all_packets = T &redef; const all_packets = T &redef;
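For example, a site that wants the pre-2.0 behaviour of a dynamically built, protocol-specific BPF filter can flip the flag back:

redef PacketFilter::all_packets = F;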

View file

@ -1,4 +1,6 @@
##! This script reports on packet loss from the various packet sources. ##! This script reports on packet loss from the various packet sources.
##! When Bro is reading input from trace files, this script will not
##! report any packet loss statistics.
@load base/frameworks/notice @load base/frameworks/notice
@ -6,7 +8,7 @@ module PacketFilter;
export { export {
redef enum Notice::Type += { redef enum Notice::Type += {
## Bro reported packets dropped by the packet filter. ## Indicates packets were dropped by the packet filter.
Dropped_Packets, Dropped_Packets,
}; };

View file

@ -1,21 +1,36 @@
##! This framework is intended to create an output and filtering path for ##! This framework is intended to create an output and filtering path for
##! internal messages/warnings/errors. It should typically be loaded to ##! internal messages/warnings/errors. It should typically be loaded to
##! avoid Bro spewing internal messages to standard error. ##! avoid Bro spewing internal messages to standard error and instead log
##! them to a file in a standard way. Note that this framework deals with
##! the handling of internally-generated reporter messages; for the
##! interface into actually creating reporter messages from the scripting
##! layer, use the built-in functions in :doc:`/scripts/base/reporter.bif`.
module Reporter; module Reporter;
export { export {
## The reporter logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## An indicator of reporter message severity.
type Level: enum { type Level: enum {
## Informational, not needing specific attention.
INFO, INFO,
## Warning of a potential problem.
WARNING, WARNING,
## A non-fatal error that should be addressed, but doesn't
## terminate program execution.
ERROR ERROR
}; };
## The record type which contains the column fields of the reporter log.
type Info: record { type Info: record {
## The network time at which the reporter event was generated.
ts: time &log; ts: time &log;
## The severity of the reporter message.
level: Level &log; level: Level &log;
## An info/warning/error message that could have either been
## generated from the internal Bro core or at the scripting-layer.
message: string &log; message: string &log;
## This is the location in a Bro script where the message originated. ## This is the location in a Bro script where the message originated.
## Not all reporter messages will have locations in them though. ## Not all reporter messages will have locations in them though.

View file

@ -1,30 +1,36 @@
##! Script level signature support. ##! Script level signature support. See the
##! :doc:`signature documentation </signatures>` for more information about
##! Bro's signature engine.
@load base/frameworks/notice @load base/frameworks/notice
module Signatures; module Signatures;
export { export {
## Add various signature-related notice types.
redef enum Notice::Type += { redef enum Notice::Type += {
## Generic for alarm-worthy ## Generic notice type for notice-worthy signature matches.
Sensitive_Signature, Sensitive_Signature,
## Host has triggered many signatures on the same host. The number of ## Host has triggered many signatures on the same host. The number of
## signatures is defined by the :bro:id:`vert_scan_thresholds` variable. ## signatures is defined by the
## :bro:id:`Signatures::vert_scan_thresholds` variable.
Multiple_Signatures, Multiple_Signatures,
## Host has triggered the same signature on multiple hosts as defined by the ## Host has triggered the same signature on multiple hosts as defined
## :bro:id:`horiz_scan_thresholds` variable. ## by the :bro:id:`Signatures::horiz_scan_thresholds` variable.
Multiple_Sig_Responders, Multiple_Sig_Responders,
## The same signature has triggered multiple times for a host. The number ## The same signature has triggered multiple times for a host. The
## of times the signature has be trigger is defined by the ## number of times the signature has been triggered is defined by the
## :bro:id:`count_thresholds` variable. To generate this notice, the ## :bro:id:`Signatures::count_thresholds` variable. To generate this
## :bro:enum:`SIG_COUNT_PER_RESP` action must be set for the signature. ## notice, the :bro:enum:`Signatures::SIG_COUNT_PER_RESP` action must
## bet set for the signature.
Count_Signature, Count_Signature,
## Summarize the number of times a host triggered a signature. The ## Summarize the number of times a host triggered a signature. The
## interval between summaries is defined by the :bro:id:`summary_interval` ## interval between summaries is defined by the
## variable. ## :bro:id:`Signatures::summary_interval` variable.
Signature_Summary, Signature_Summary,
}; };
## The signature logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## These are the default actions you can apply to signature matches. ## These are the default actions you can apply to signature matches.
@ -39,8 +45,8 @@ export {
SIG_QUIET, SIG_QUIET,
## Generate a notice. ## Generate a notice.
SIG_LOG, SIG_LOG,
## The same as :bro:enum:`SIG_FILE`, but ignore for aggregate/scan ## The same as :bro:enum:`Signatures::SIG_LOG`, but ignore for
## processing. ## aggregate/scan processing.
SIG_FILE_BUT_NO_SCAN, SIG_FILE_BUT_NO_SCAN,
## Generate a notice and set it to be alarmed upon. ## Generate a notice and set it to be alarmed upon.
SIG_ALARM, SIG_ALARM,
@ -49,22 +55,33 @@ export {
## Alarm once and then never again. ## Alarm once and then never again.
SIG_ALARM_ONCE, SIG_ALARM_ONCE,
## Count signatures per responder host and alarm with the ## Count signatures per responder host and alarm with the
## :bro:enum:`Count_Signature` notice if a threshold defined by ## :bro:enum:`Signatures::Count_Signature` notice if a threshold
## :bro:id:`count_thresholds` is reached. ## defined by :bro:id:`Signatures::count_thresholds` is reached.
SIG_COUNT_PER_RESP, SIG_COUNT_PER_RESP,
## Don't alarm, but generate per-orig summary. ## Don't alarm, but generate per-orig summary.
SIG_SUMMARY, SIG_SUMMARY,
}; };
## The record type which contains the column fields of the signature log.
type Info: record { type Info: record {
## The network time at which a signature matching type of event to
## be logged has occurred.
ts: time &log; ts: time &log;
## The host which triggered the signature match event.
src_addr: addr &log &optional; src_addr: addr &log &optional;
## The host port on which the signature-matching activity occurred.
src_port: port &log &optional; src_port: port &log &optional;
## The destination host which was sent the payload that triggered the
## signature match.
dst_addr: addr &log &optional; dst_addr: addr &log &optional;
## The destination host port which was sent the payload that triggered
## the signature match.
dst_port: port &log &optional; dst_port: port &log &optional;
## Notice associated with signature event ## Notice associated with signature event
note: Notice::Type &log; note: Notice::Type &log;
## The name of the signature that matched.
sig_id: string &log &optional; sig_id: string &log &optional;
## A more descriptive message of the signature-matching event.
event_msg: string &log &optional; event_msg: string &log &optional;
## Extracted payload data or extra message. ## Extracted payload data or extra message.
sub_msg: string &log &optional; sub_msg: string &log &optional;
@ -82,22 +99,26 @@ export {
## Signature IDs that should always be ignored. ## Signature IDs that should always be ignored.
const ignored_ids = /NO_DEFAULT_MATCHES/ &redef; const ignored_ids = /NO_DEFAULT_MATCHES/ &redef;
## Alarm if, for a pair [orig, signature], the number of different ## Generate a notice if, for a pair [orig, signature], the number of
## responders has reached one of the thresholds. ## different responders has reached one of the thresholds.
const horiz_scan_thresholds = { 5, 10, 50, 100, 500, 1000 } &redef; const horiz_scan_thresholds = { 5, 10, 50, 100, 500, 1000 } &redef;
## Alarm if, for a pair [orig, resp], the number of different signature ## Generate a notice if, for a pair [orig, resp], the number of different
## matches has reached one of the thresholds. ## signature matches has reached one of the thresholds.
const vert_scan_thresholds = { 5, 10, 50, 100, 500, 1000 } &redef; const vert_scan_thresholds = { 5, 10, 50, 100, 500, 1000 } &redef;
## Alarm if a :bro:enum:`SIG_COUNT_PER_RESP` signature is triggered as ## Generate a notice if a :bro:enum:`Signatures::SIG_COUNT_PER_RESP`
## often as given by one of these thresholds. ## signature is triggered as often as given by one of these thresholds.
const count_thresholds = { 5, 10, 50, 100, 500, 1000, 10000, 1000000, } &redef; const count_thresholds = { 5, 10, 50, 100, 500, 1000, 10000, 1000000, } &redef;
## The interval between when :bro:id:`Signature_Summary` notices are ## The interval between when :bro:enum:`Signatures::Signature_Summary`
## generated. ## notices are generated.
const summary_interval = 1 day &redef; const summary_interval = 1 day &redef;
## This event can be handled to access/alter data about to be logged
## to the signature logging stream.
##
## rec: The record of signature data about to be logged.
global log_signature: event(rec: Info); global log_signature: event(rec: Info);
} }
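A sketch of consuming that event; the pattern match is illustrative only:

event Signatures::log_signature(rec: Signatures::Info)
    {
    if ( rec?$sig_id && /worm/ in rec$sig_id )
        print fmt("signature match: %s", rec$sig_id);
    }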

View file

@ -1,5 +1,5 @@
##! This script provides the framework for software version detection and ##! This script provides the framework for software version detection and
##! parsing, but doesn't actually do any detection on its own. It relies on ##! parsing but doesn't actually do any detection on its own. It relies on
##! other protocol specific scripts to parse out software from the protocols ##! other protocol specific scripts to parse out software from the protocols
##! that they analyze. The entry point for providing new software detections ##! that they analyze. The entry point for providing new software detections
##! to this framework is through the :bro:id:`Software::found` function. ##! to this framework is through the :bro:id:`Software::found` function.
@ -10,39 +10,44 @@
module Software; module Software;
export { export {
## The software logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## Scripts detecting new types of software need to redef this enum to add
## their own specific software types which would then be used when they
## create :bro:type:`Software::Info` records.
type Type: enum { type Type: enum {
## A placeholder type for when the type of software is not known.
UNKNOWN, UNKNOWN,
OPERATING_SYSTEM,
DATABASE_SERVER,
# There are a number of ways to detect printers on the
# network, we just need to codify them in a script and move
# this out of here. It isn't currently used for anything.
PRINTER,
}; };
## A structure to represent the numeric version of software.
type Version: record { type Version: record {
major: count &optional; ##< Major version number ## Major version number
minor: count &optional; ##< Minor version number major: count &optional;
minor2: count &optional; ##< Minor subversion number ## Minor version number
addl: string &optional; ##< Additional version string (e.g. "beta42") minor: count &optional;
## Minor subversion number
minor2: count &optional;
## Additional version string (e.g. "beta42")
addl: string &optional;
} &log; } &log;
## The record type that is used for representing and logging software.
type Info: record { type Info: record {
## The time at which the software was first detected. ## The time at which the software was detected.
ts: time &log; ts: time &log;
## The IP address detected running the software. ## The IP address detected running the software.
host: addr &log; host: addr &log;
## The type of software detected (e.g. WEB_SERVER) ## The type of software detected (e.g. :bro:enum:`HTTP::SERVER`).
software_type: Type &log &default=UNKNOWN; software_type: Type &log &default=UNKNOWN;
## Name of the software (e.g. Apache) ## Name of the software (e.g. Apache).
name: string &log; name: string &log;
## Version of the software ## Version of the software.
version: Version &log; version: Version &log;
## The full unparsed version string found because the version parsing ## The full unparsed version string found because the version parsing
## doesn't work 100% reliably and this acts as a fall back in the logs. ## doesn't always work reliably in all cases and this acts as a
## fallback in the logs.
unparsed_version: string &log &optional; unparsed_version: string &log &optional;
## This can indicate that this software being detected should ## This can indicate that this software being detected should
@ -55,37 +60,48 @@ export {
force_log: bool &default=F; force_log: bool &default=F;
}; };
## The hosts whose software should be detected and tracked. ## Hosts whose software should be detected and tracked.
## Choices are: LOCAL_HOSTS, REMOTE_HOSTS, ALL_HOSTS, NO_HOSTS ## Choices are: LOCAL_HOSTS, REMOTE_HOSTS, ALL_HOSTS, NO_HOSTS
const asset_tracking = LOCAL_HOSTS &redef; const asset_tracking = LOCAL_HOSTS &redef;
## Other scripts should call this function when they detect software. ## Other scripts should call this function when they detect software.
## unparsed_version: This is the full string from which the ## unparsed_version: This is the full string from which the
## :bro:type:`Software::Info` was extracted. ## :bro:type:`Software::Info` was extracted.
##
## id: The connection id where the software was discovered.
##
## info: A record representing the software discovered.
##
## Returns: T if the software was logged, F otherwise. ## Returns: T if the software was logged, F otherwise.
global found: function(id: conn_id, info: Software::Info): bool; global found: function(id: conn_id, info: Software::Info): bool;
## This function can take many software version strings and parse them ## Take many common software version strings and parse them
## into a sensible :bro:type:`Software::Version` record. There are ## into a sensible :bro:type:`Software::Version` record. There are
## still many cases where scripts may have to have their own specific ## still many cases where scripts may have to have their own specific
## version parsing though. ## version parsing though.
##
## unparsed_version: The raw version string.
##
## host: The host where the software was discovered.
##
## software_type: The type of software.
##
## Returns: A complete record ready for the :bro:id:`Software::found` function.
global parse: function(unparsed_version: string, global parse: function(unparsed_version: string,
host: addr, host: addr,
software_type: Type): Info; software_type: Type): Info;
## Compare two versions. ## Compare two version records.
##
## Returns: -1 for v1 < v2, 0 for v1 == v2, 1 for v1 > v2. ## Returns: -1 for v1 < v2, 0 for v1 == v2, 1 for v1 > v2.
## If the numerical version numbers match, the addl string ## If the numerical version numbers match, the addl string
## is compared lexicographically. ## is compared lexicographically.
global cmp_versions: function(v1: Version, v2: Version): int; global cmp_versions: function(v1: Version, v2: Version): int;
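A hedged sketch of how a protocol script might feed this framework, parsing a Server header and handing the result to Software::found; it assumes an HTTP::SERVER value has been added to Software::Type, as referenced above:

event http_header(c: connection, is_orig: bool, name: string, value: string)
    {
    # Header names are reported in upper case by the HTTP analyzer.
    if ( ! is_orig && name == "SERVER" )
        {
        local si = Software::parse(value, c$id$resp_h, HTTP::SERVER);
        Software::found(c$id, si);
        }
    }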
## This type represents a set of software. It's used by the ## Type to represent a collection of :bro:type:`Software::Info` records.
## :bro:id:`tracked` variable to store all known pieces of software ## It's indexed with the name of a piece of software such as "Firefox"
## for a particular host. It's indexed with the name of a piece of ## and it yields a :bro:type:`Software::Info` record with more information
## software such as "Firefox" and it yields a ## about the software.
## :bro:type:`Software::Info` record with more information about the
## software.
type SoftwareSet: table[string] of Info; type SoftwareSet: table[string] of Info;
## The set of software associated with an address. Data expires from ## The set of software associated with an address. Data expires from

File diff suppressed because it is too large

View file

@ -1,10 +1,13 @@
##! This script can be used to extract either the originator's data or the ##! This script can be used to extract either the originator's data or the
##! responder's data or both. By default nothing is extracted, and in order ##! responder's data or both. By default nothing is extracted, and in order
##! to actually extract data the ``c$extract_orig`` and/or the ##! to actually extract data the ``c$extract_orig`` and/or the
##! ``c$extract_resp`` variable must be set to T. One way to achieve this ##! ``c$extract_resp`` variable must be set to ``T``. One way to achieve this
##! would be to handle the connection_established event elsewhere and set the ##! would be to handle the :bro:id:`connection_established` event elsewhere
##! extract_orig and extract_resp options there. However, there may be trouble ##! and set the ``extract_orig`` and ``extract_resp`` options there.
##! with the timing due the event queue delay. ##! However, there may be trouble with the timing due to event queue delay.
##!
##! .. note::
##!
##! This script does not work well in a cluster context unless it has a ##! This script does not work well in a cluster context unless it has a
##! remotely mounted disk to write the content files to. ##! remotely mounted disk to write the content files to.
@ -13,11 +16,12 @@
module Conn; module Conn;
export { export {
## The prefix given to files as they are opened on disk. ## The prefix given to files containing extracted connections as they are
## opened on disk.
const extraction_prefix = "contents" &redef; const extraction_prefix = "contents" &redef;
## If this variable is set to T, then all contents of all files will be ## If this variable is set to ``T``, then all contents of all connections
## extracted. ## will be extracted.
const default_extract = F &redef; const default_extract = F &redef;
} }
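A minimal sketch of the per-connection switch described in the header comment above, subject to the event-queue timing caveat; extracting traffic on port 80 is an arbitrary example:

event connection_established(c: connection)
    {
    if ( c$id$resp_p == 80/tcp )
        {
        c$extract_orig = T;
        c$extract_resp = T;
        }
    }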

View file

@ -4,7 +4,7 @@
module Conn; module Conn;
export { export {
## Define inactivty timeouts by the service detected being used over ## Define inactivity timeouts by the service detected being used over
## the connection. ## the connection.
const analyzer_inactivity_timeouts: table[AnalyzerTag] of interval = { const analyzer_inactivity_timeouts: table[AnalyzerTag] of interval = {
# For interactive services, allow longer periods of inactivity. # For interactive services, allow longer periods of inactivity.

View file

@ -1,17 +1,33 @@
##! This script manages the tracking/logging of general information regarding
##! TCP, UDP, and ICMP traffic. For UDP and ICMP, "connections" are to
##! be interpreted using flow semantics (sequence of packets from a source
##! host/port to a destination host/port). Further, ICMP "ports" are to
##! be interpreted as the source port meaning the ICMP message type and
##! the destination port being the ICMP message code.
@load base/utils/site @load base/utils/site
module Conn; module Conn;
export { export {
## The connection logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## The record type which contains column fields of the connection log.
type Info: record { type Info: record {
## This is the time of the first packet. ## This is the time of the first packet.
ts: time &log; ts: time &log;
## A unique identifier of a connection.
uid: string &log; uid: string &log;
## The connection's 4-tuple of endpoint addresses/ports.
id: conn_id &log; id: conn_id &log;
## The transport layer protocol of the connection.
proto: transport_proto &log; proto: transport_proto &log;
## An identification of an application protocol being sent over the
## connection.
service: string &log &optional; service: string &log &optional;
## How long the connection lasted. For 3-way or 4-way connection
## tear-downs, this will not include the final ACK.
duration: interval &log &optional; duration: interval &log &optional;
## The number of payload bytes the originator sent. For TCP ## The number of payload bytes the originator sent. For TCP
## this is taken from sequence numbers and might be inaccurate ## this is taken from sequence numbers and might be inaccurate
@ -51,8 +67,8 @@ export {
## have been completed prior to the packet loss. ## have been completed prior to the packet loss.
missed_bytes: count &log &default=0; missed_bytes: count &log &default=0;
## Records the state history of (TCP) connections as ## Records the state history of connections as a string of letters.
## a string of letters. ## For TCP connections the meaning of those letters is:
## ##
## ====== ==================================================== ## ====== ====================================================
## Letter Meaning ## Letter Meaning
@ -71,7 +87,8 @@ export {
## originator and lower case then means the responder. ## originator and lower case then means the responder.
## Also, there is compression. We only record one "d" in each direction, ## Also, there is compression. We only record one "d" in each direction,
## for instance. I.e., we just record that data went in that direction. ## for instance. I.e., we just record that data went in that direction.
## This history is not meant to encode how much data that happened to be. ## This history is not meant to encode how much data that happened to
## be.
history: string &log &optional; history: string &log &optional;
## Number of packets the originator sent. ## Number of packets the originator sent.
## Only set if :bro:id:`use_conn_size_analyzer` = T ## Only set if :bro:id:`use_conn_size_analyzer` = T
@ -86,6 +103,8 @@ export {
resp_ip_bytes: count &log &optional; resp_ip_bytes: count &log &optional;
}; };
## Event that can be handled to access the :bro:type:`Conn::Info`
## record as it is sent on to the logging framework.
global log_conn: event(rec: Info); global log_conn: event(rec: Info);
} }
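An illustrative consumer of that event; the duration threshold is arbitrary:

event Conn::log_conn(rec: Conn::Info)
    {
    if ( rec?$duration && rec$duration > 1hr )
        print fmt("long-lived %s connection %s", rec$proto, rec$uid);
    }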

View file

@ -4,9 +4,9 @@
module DNS; module DNS;
export { export {
const PTR = 12; const PTR = 12; ##< RR TYPE value for a domain name pointer.
const EDNS = 41; const EDNS = 41; ##< An OPT RR TYPE value described by EDNS.
const ANY = 255; const ANY = 255; ##< A QTYPE value describing a request for all records.
## Mapping of DNS query type codes to human readable string representation. ## Mapping of DNS query type codes to human readable string representation.
const query_types = { const query_types = {
@ -29,50 +29,43 @@ export {
[ANY] = "*", [ANY] = "*",
} &default = function(n: count): string { return fmt("query-%d", n); }; } &default = function(n: count): string { return fmt("query-%d", n); };
const code_types = {
[0] = "X0",
[1] = "Xfmt",
[2] = "Xsrv",
[3] = "Xnam",
[4] = "Ximp",
[5] = "X[",
} &default="?";
## Errors used for non-TSIG/EDNS types. ## Errors used for non-TSIG/EDNS types.
const base_errors = { const base_errors = {
[0] = "NOERROR", ##< No Error [0] = "NOERROR", # No Error
[1] = "FORMERR", ##< Format Error [1] = "FORMERR", # Format Error
[2] = "SERVFAIL", ##< Server Failure [2] = "SERVFAIL", # Server Failure
[3] = "NXDOMAIN", ##< Non-Existent Domain [3] = "NXDOMAIN", # Non-Existent Domain
[4] = "NOTIMP", ##< Not Implemented [4] = "NOTIMP", # Not Implemented
[5] = "REFUSED", ##< Query Refused [5] = "REFUSED", # Query Refused
[6] = "YXDOMAIN", ##< Name Exists when it should not [6] = "YXDOMAIN", # Name Exists when it should not
[7] = "YXRRSET", ##< RR Set Exists when it should not [7] = "YXRRSET", # RR Set Exists when it should not
[8] = "NXRRSet", ##< RR Set that should exist does not [8] = "NXRRSet", # RR Set that should exist does not
[9] = "NOTAUTH", ##< Server Not Authoritative for zone [9] = "NOTAUTH", # Server Not Authoritative for zone
[10] = "NOTZONE", ##< Name not contained in zone [10] = "NOTZONE", # Name not contained in zone
[11] = "unassigned-11", ##< available for assignment [11] = "unassigned-11", # available for assignment
[12] = "unassigned-12", ##< available for assignment [12] = "unassigned-12", # available for assignment
[13] = "unassigned-13", ##< available for assignment [13] = "unassigned-13", # available for assignment
[14] = "unassigned-14", ##< available for assignment [14] = "unassigned-14", # available for assignment
[15] = "unassigned-15", ##< available for assignment [15] = "unassigned-15", # available for assignment
[16] = "BADVERS", ##< for EDNS, collision w/ TSIG [16] = "BADVERS", # for EDNS, collision w/ TSIG
[17] = "BADKEY", ##< Key not recognized [17] = "BADKEY", # Key not recognized
[18] = "BADTIME", ##< Signature out of time window [18] = "BADTIME", # Signature out of time window
[19] = "BADMODE", ##< Bad TKEY Mode [19] = "BADMODE", # Bad TKEY Mode
[20] = "BADNAME", ##< Duplicate key name [20] = "BADNAME", # Duplicate key name
[21] = "BADALG", ##< Algorithm not supported [21] = "BADALG", # Algorithm not supported
[22] = "BADTRUNC", ##< draft-ietf-dnsext-tsig-sha-05.txt [22] = "BADTRUNC", # draft-ietf-dnsext-tsig-sha-05.txt
[3842] = "BADSIG", ##< 16 <= number collision with EDNS(16); [3842] = "BADSIG", # 16 <= number collision with EDNS(16);
##< this is a translation from TSIG(16) # this is a translation from TSIG(16)
} &default = function(n: count): string { return fmt("rcode-%d", n); }; } &default = function(n: count): string { return fmt("rcode-%d", n); };
# This deciphers EDNS Z field values. ## This deciphers EDNS Z field values.
const edns_zfield = { const edns_zfield = {
[0] = "NOVALUE", # regular entry [0] = "NOVALUE", # regular entry
[32768] = "DNS_SEC_OK", # accepts DNS Sec RRs [32768] = "DNS_SEC_OK", # accepts DNS Sec RRs
} &default="?"; } &default="?";
## Possible values of the CLASS field in resource records or QCLASS field
## in query messages.
const classes = { const classes = {
[1] = "C_INTERNET", [1] = "C_INTERNET",
[2] = "C_CSNET", [2] = "C_CSNET",

View file

@ -1,38 +1,80 @@
##! Base DNS analysis script which tracks and logs DNS queries along with
##! their responses.
@load ./consts @load ./consts
module DNS; module DNS;
export { export {
## The DNS logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## The record type which contains the column fields of the DNS log.
type Info: record { type Info: record {
## The earliest time at which a DNS protocol message over the
## associated connection is observed.
ts: time &log; ts: time &log;
## A unique identifier of the connection over which DNS messages
## are being transferred.
uid: string &log; uid: string &log;
## The connection's 4-tuple of endpoint addresses/ports.
id: conn_id &log; id: conn_id &log;
## The transport layer protocol of the connection.
proto: transport_proto &log; proto: transport_proto &log;
## A 16 bit identifier assigned by the program that generated the
## DNS query. Also used in responses to match up replies to
## outstanding queries.
trans_id: count &log &optional; trans_id: count &log &optional;
## The domain name that is the subject of the DNS query.
query: string &log &optional; query: string &log &optional;
## The QCLASS value specifying the class of the query.
qclass: count &log &optional; qclass: count &log &optional;
## A descriptive name for the class of the query.
qclass_name: string &log &optional; qclass_name: string &log &optional;
## A QTYPE value specifying the type of the query.
qtype: count &log &optional; qtype: count &log &optional;
## A descriptive name for the type of the query.
qtype_name: string &log &optional; qtype_name: string &log &optional;
## The response code value in DNS response messages.
rcode: count &log &optional; rcode: count &log &optional;
## A descriptive name for the response code value.
rcode_name: string &log &optional; rcode_name: string &log &optional;
## Whether the message is a query (F) or response (T).
QR: bool &log &default=F; QR: bool &log &default=F;
## The Authoritative Answer bit for response messages specifies that
## the responding name server is an authority for the domain name
## in the question section.
AA: bool &log &default=F; AA: bool &log &default=F;
## The Truncation bit specifies that the message was truncated.
TC: bool &log &default=F; TC: bool &log &default=F;
## The Recursion Desired bit indicates to a name server to recursively
## pursue the query.
RD: bool &log &default=F; RD: bool &log &default=F;
## The Recursion Available bit in a response message indicates if
## the name server supports recursive queries.
RA: bool &log &default=F; RA: bool &log &default=F;
## A reserved field that is currently supposed to be zero in all
## queries and responses.
Z: count &log &default=0; Z: count &log &default=0;
## The set of resource descriptions in answer of the query.
answers: vector of string &log &optional; answers: vector of string &log &optional;
## The caching intervals of the associated RRs described by the
## ``answers`` field.
TTLs: vector of interval &log &optional; TTLs: vector of interval &log &optional;
## This value indicates if this request/response pair is ready to be logged. ## This value indicates if this request/response pair is ready to be
## logged.
ready: bool &default=F; ready: bool &default=F;
## The total number of resource records in a reply message's answer
## section.
total_answers: count &optional; total_answers: count &optional;
## The total number of resource records in a reply message's answer,
## authority, and additional sections.
total_replies: count &optional; total_replies: count &optional;
}; };
## A record type which tracks the status of DNS queries for a given
## :bro:type:`connection`.
type State: record { type State: record {
## Indexed by query id, returns Info record corresponding to ## Indexed by query id, returns Info record corresponding to
## query/response which haven't completed yet. ## query/response which haven't completed yet.
@ -44,11 +86,21 @@ export {
finished_answers: set[count] &optional; finished_answers: set[count] &optional;
}; };
## An event that can be handled to access the :bro:type:`DNS::Info`
## record as it is sent to the logging framework.
global log_dns: event(rec: Info); global log_dns: event(rec: Info);
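For example, a handler could watch for TXT lookups as records head to the log; this is a sketch only:

event DNS::log_dns(rec: DNS::Info)
    {
    if ( rec?$qtype_name && rec$qtype_name == "TXT" && rec?$query )
        print fmt("TXT query for %s", rec$query);
    }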
## This is called by the specific dns_*_reply events with a "reply" which ## This is called by the specific dns_*_reply events with a "reply" which
## may not represent the full data available from the resource record, but ## may not represent the full data available from the resource record, but
## it's generally considered a summarization of the response(s). ## it's generally considered a summarization of the response(s).
##
## c: The connection record for which to fill in DNS reply data.
##
## msg: The DNS message header information for the response.
##
## ans: The general information of a RR response.
##
## reply: The specific response information according to RR type/class.
global do_reply: event(c: connection, msg: dns_msg, ans: dns_answer, reply: string); global do_reply: event(c: connection, msg: dns_msg, ans: dns_answer, reply: string);
} }

View file

@ -1,4 +1,4 @@
##! File extraction for FTP. ##! File extraction support for FTP.
@load ./main @load ./main
@load base/utils/files @load base/utils/files
@ -6,7 +6,7 @@
module FTP; module FTP;
export { export {
## Pattern of file mime types to extract from FTP entity bodies. ## Pattern of file mime types to extract from FTP transfers.
const extract_file_types = /NO_DEFAULT/ &redef; const extract_file_types = /NO_DEFAULT/ &redef;
## The on-disk prefix for files to be extracted from FTP-data transfers. ## The on-disk prefix for files to be extracted from FTP-data transfers.
@ -14,10 +14,15 @@ export {
} }
redef record Info += { redef record Info += {
## The file handle for the file to be extracted ## On disk file where it was extracted to.
extraction_file: file &log &optional; extraction_file: file &log &optional;
## Indicates if the current command/response pair should attempt to
## extract the file if a file was transferred.
extract_file: bool &default=F; extract_file: bool &default=F;
## Internal tracking of the total number of files extracted during this
## session.
num_extracted_files: count &default=0; num_extracted_files: count &default=0;
}; };
@ -33,7 +38,6 @@ event file_transferred(c: connection, prefix: string, descr: string,
if ( extract_file_types in s$mime_type ) if ( extract_file_types in s$mime_type )
{ {
s$extract_file = T; s$extract_file = T;
add s$tags["extracted_file"];
++s$num_extracted_files; ++s$num_extracted_files;
} }
} }

View file

@ -2,10 +2,6 @@
##! along with metadata. For example, if files are transferred, the argument ##! along with metadata. For example, if files are transferred, the argument
##! will take on the full path that the client is at along with the requested ##! will take on the full path that the client is at along with the requested
##! file name. ##! file name.
##!
##! TODO:
##!
##! * Handle encrypted sessions correctly (get an example?)
@load ./utils-commands @load ./utils-commands
@load base/utils/paths @load base/utils/paths
@ -14,38 +10,64 @@
module FTP; module FTP;
export { export {
## The FTP protocol logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
## List of commands that should have their command/response pairs logged.
const logged_commands = {
"APPE", "DELE", "RETR", "STOR", "STOU", "ACCT"
} &redef;
## This setting changes if passwords used in FTP sessions are captured or not. ## This setting changes if passwords used in FTP sessions are captured or not.
const default_capture_password = F &redef; const default_capture_password = F &redef;
## User IDs that can be considered "anonymous".
const guest_ids = { "anonymous", "ftp", "guest" } &redef;
type Info: record { type Info: record {
## Time when the command was sent.
ts: time &log; ts: time &log;
uid: string &log; uid: string &log;
id: conn_id &log; id: conn_id &log;
## User name for the current FTP session.
user: string &log &default="<unknown>"; user: string &log &default="<unknown>";
## Password for the current FTP session if captured.
password: string &log &optional; password: string &log &optional;
## Command given by the client.
command: string &log &optional; command: string &log &optional;
## Argument for the command if one is given.
arg: string &log &optional; arg: string &log &optional;
## Libmagic "sniffed" file type if the command indicates a file transfer.
mime_type: string &log &optional; mime_type: string &log &optional;
## Libmagic "sniffed" file description if the command indicates a file transfer.
mime_desc: string &log &optional; mime_desc: string &log &optional;
## Size of the file if the command indicates a file transfer.
file_size: count &log &optional; file_size: count &log &optional;
## Reply code from the server in response to the command.
reply_code: count &log &optional; reply_code: count &log &optional;
## Reply message from the server in response to the command.
reply_msg: string &log &optional; reply_msg: string &log &optional;
## Arbitrary tags that may indicate a particular attribute of this command.
tags: set[string] &log &default=set(); tags: set[string] &log &default=set();
## By setting the CWD to '/.', we can indicate that unless something ## Current working directory that this session is in. By making
## the default value '/.', we can indicate that unless something
## more concrete is discovered that the existing but unknown ## more concrete is discovered that the existing but unknown
## directory is ok to use. ## directory is ok to use.
cwd: string &default="/."; cwd: string &default="/.";
## Command that is currently waiting for a response.
cmdarg: CmdArg &optional; cmdarg: CmdArg &optional;
## Commands that have been sent but not yet responded to
## are queued here.
pending_commands: PendingCmds; pending_commands: PendingCmds;
## This indicates if the session is in active or passive mode. ## Indicates if the session is in active or passive mode.
passive: bool &default=F; passive: bool &default=F;
## This determines if the password will be captured for this request. ## Determines if the password will be captured for this request.
capture_password: bool &default=default_capture_password; capture_password: bool &default=default_capture_password;
}; };
@ -57,21 +79,11 @@ export {
z: count; z: count;
}; };
# TODO: add this back in some form. raise a notice again? ## Parse FTP reply codes into the three constituent single digit values.
#const excessive_filename_len = 250 &redef;
#const excessive_filename_trunc_len = 32 &redef;
## These are user IDs that can be considered "anonymous".
const guest_ids = { "anonymous", "ftp", "guest" } &redef;
## The list of commands that should have their command/response pairs logged.
const logged_commands = {
"APPE", "DELE", "RETR", "STOR", "STOU", "ACCT"
} &redef;
## This function splits FTP reply codes into the three constituent
global parse_ftp_reply_code: function(code: count): ReplyCode; global parse_ftp_reply_code: function(code: count): ReplyCode;
## Event that can be handled to access the :bro:type:`FTP::Info`
## record as it is sent on to the logging framework.
global log_ftp: event(rec: Info); global log_ftp: event(rec: Info);
} }
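As a hedged sketch of how these exported items can be used from another script, the handler below breaks a reply code into its digits with parse_ftp_reply_code (e.g. 226 yields $x=2, $y=2, $z=6) and reports permanent failures; the reporting itself is only an example:

    event FTP::log_ftp(rec: FTP::Info)
        {
        # Reply codes split into single digits, e.g. 226 -> $x=2, $y=2, $z=6.
        if ( rec?$reply_code && FTP::parse_ftp_reply_code(rec$reply_code)$x == 5 )
            print fmt("permanent FTP failure from %s (code %d)",
                      rec$id$resp_h, rec$reply_code);
        }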

View file

@ -2,14 +2,22 @@ module FTP;
export { export {
type CmdArg: record { type CmdArg: record {
## Time when the command was sent.
ts: time; ts: time;
## Command.
cmd: string &default="<unknown>"; cmd: string &default="<unknown>";
## Argument for the command if one was given.
arg: string &default=""; arg: string &default="";
## Counter to track how many commands have been executed.
seq: count &default=0; seq: count &default=0;
}; };
## Structure for tracking pending commands in the event that the client
## sends a large number of commands before the server has a chance to
## reply.
type PendingCmds: table[count] of CmdArg; type PendingCmds: table[count] of CmdArg;
## Possible response codes for a wide variety of FTP commands.
const cmd_reply_code: set[string, count] = { const cmd_reply_code: set[string, count] = {
# According to RFC 959 # According to RFC 959
["<init>", [120, 220, 421]], ["<init>", [120, 220, 421]],

View file

@ -8,29 +8,24 @@
module HTTP; module HTTP;
export { export {
## Pattern of file mime types to extract from HTTP entity bodies. ## Pattern of file mime types to extract from HTTP response entity bodies.
const extract_file_types = /NO_DEFAULT/ &redef; const extract_file_types = /NO_DEFAULT/ &redef;
## The on-disk prefix for files to be extracted from HTTP entity bodies. ## The on-disk prefix for files to be extracted from HTTP entity bodies.
const extraction_prefix = "http-item" &redef; const extraction_prefix = "http-item" &redef;
redef record Info += { redef record Info += {
## This field can be set per-connection to determine if the entity body ## On-disk file where the response body was extracted to.
## will be extracted. It must be set to T on or before the first
## entity_body_data event.
extracting_file: bool &default=F;
## This is the holder for the file handle as the file is being written
## to disk.
extraction_file: file &log &optional; extraction_file: file &log &optional;
};
redef record State += { ## Indicates if the response body is to be extracted or not. Must be
entity_bodies: count &default=0; ## set before or by the first :bro:id:`http_entity_data` event for the
## content.
extract_file: bool &default=F;
}; };
} }
event http_entity_data(c: connection, is_orig: bool, length: count, data: string) &priority=5 event http_entity_data(c: connection, is_orig: bool, length: count, data: string) &priority=-5
{ {
# Client body extraction is not currently supported in this script. # Client body extraction is not currently supported in this script.
if ( is_orig ) if ( is_orig )
@ -41,8 +36,12 @@ event http_entity_data(c: connection, is_orig: bool, length: count, data: string
if ( c$http?$mime_type && if ( c$http?$mime_type &&
extract_file_types in c$http$mime_type ) extract_file_types in c$http$mime_type )
{ {
c$http$extracting_file = T; c$http$extract_file = T;
local suffix = fmt("%s_%d.dat", is_orig ? "orig" : "resp", ++c$http_state$entity_bodies); }
if ( c$http$extract_file )
{
local suffix = fmt("%s_%d.dat", is_orig ? "orig" : "resp", c$http_state$current_response);
local fname = generate_extraction_filename(extraction_prefix, c, suffix); local fname = generate_extraction_filename(extraction_prefix, c, suffix);
c$http$extraction_file = open(fname); c$http$extraction_file = open(fname);
@ -50,12 +49,12 @@ event http_entity_data(c: connection, is_orig: bool, length: count, data: string
} }
} }
if ( c$http$extracting_file ) if ( c$http?$extraction_file )
print c$http$extraction_file, data; print c$http$extraction_file, data;
} }
event http_end_entity(c: connection, is_orig: bool) event http_end_entity(c: connection, is_orig: bool)
{ {
if ( c$http$extracting_file ) if ( c$http?$extraction_file )
close(c$http$extraction_file); close(c$http$extraction_file);
} }
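A minimal usage sketch for the two constants above, assuming extraction of Windows executables is desired (both values are examples only):

    redef HTTP::extract_file_types = /application\/x-dosexec/;
    redef HTTP::extraction_prefix = "extracted-http-body";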

View file

@ -11,7 +11,8 @@ export {
}; };
redef record Info += { redef record Info += {
## The MD5 sum for a file transferred over HTTP will be stored here. ## MD5 sum for a file transferred over HTTP calculated from the
## response body.
md5: string &log &optional; md5: string &log &optional;
## This value can be set per-transfer to determine per request ## This value can be set per-transfer to determine per request
@ -19,8 +20,8 @@ export {
## set to T at the time of or before the first chunk of body data. ## set to T at the time of or before the first chunk of body data.
calc_md5: bool &default=F; calc_md5: bool &default=F;
## This boolean value indicates if an MD5 sum is currently being ## Indicates if an MD5 sum is being calculated for the current
## calculated for the current file transfer. ## request/response pair.
calculating_md5: bool &default=F; calculating_md5: bool &default=F;
}; };
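As a sketch of the per-request usage described above, a hypothetical handler could request an MD5 sum only for URIs that look like executables; the URI pattern is an example, and the flag is set before any body data arrives as required:

    event http_request(c: connection, method: string, original_URI: string,
                       unescaped_URI: string, version: string)
        {
        # Ask for an MD5 of the response body for .exe requests only.
        if ( /\.[eE][xX][eE]$/ in unescaped_URI )
            c$http$calc_md5 = T;
        }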

View file

@ -1,5 +1,4 @@
##! This script is involved in the identification of file types in HTTP ##! Identification of file types in HTTP response bodies with file content sniffing.
##! response bodies.
@load base/frameworks/signatures @load base/frameworks/signatures
@load base/frameworks/notice @load base/frameworks/notice
@ -15,27 +14,23 @@ module HTTP;
export { export {
redef enum Notice::Type += { redef enum Notice::Type += {
# This notice is thrown when the file extension doesn't ## Indicates when the file extension doesn't seem to match the file contents.
# seem to match the file contents.
Incorrect_File_Type, Incorrect_File_Type,
}; };
redef record Info += { redef record Info += {
## This will record the mime_type identified. ## Mime type of response body identified by content sniffing.
mime_type: string &log &optional; mime_type: string &log &optional;
## This indicates that no data of the current file transfer has been ## Indicates that no data of the current file transfer has been
## seen yet. After the first :bro:id:`http_entity_data` event, it ## seen yet. After the first :bro:id:`http_entity_data` event, it
## will be set to T. ## will be set to F.
first_chunk: bool &default=T; first_chunk: bool &default=T;
}; };
redef enum Tags += { ## Mapping between mime types and regular expressions for URLs
IDENTIFIED_FILE ## The :bro:enum:`HTTP::Incorrect_File_Type` notice is generated if the pattern
}; ## doesn't match the mime type that was discovered.
# Create regexes that *should* in be in the urls for specifics mime types.
# Notices are thrown if the pattern doesn't match the url for the file type.
const mime_types_extensions: table[string] of pattern = { const mime_types_extensions: table[string] of pattern = {
["application/x-dosexec"] = /\.([eE][xX][eE]|[dD][lL][lL])/, ["application/x-dosexec"] = /\.([eE][xX][eE]|[dD][lL][lL])/,
} &redef; } &redef;

View file

@ -1,3 +1,7 @@
##! Implements base functionality for HTTP analysis. The logging model is
##! to log request/response pairs and all relevant metadata together in
##! a single record.
@load base/utils/numbers @load base/utils/numbers
@load base/utils/files @load base/utils/files
@ -8,6 +12,7 @@ export {
## Indicate a type of attack or compromise in the record to be logged. ## Indicate a type of attack or compromise in the record to be logged.
type Tags: enum { type Tags: enum {
## Placeholder.
EMPTY EMPTY
}; };
@ -15,64 +20,69 @@ export {
const default_capture_password = F &redef; const default_capture_password = F &redef;
type Info: record { type Info: record {
## Timestamp for when the request happened.
ts: time &log; ts: time &log;
uid: string &log; uid: string &log;
id: conn_id &log; id: conn_id &log;
## This represents the pipelined depth into the connection of this ## Represents the pipelined depth into the connection of this
## request/response transaction. ## request/response transaction.
trans_depth: count &log; trans_depth: count &log;
## The verb used in the HTTP request (GET, POST, HEAD, etc.). ## Verb used in the HTTP request (GET, POST, HEAD, etc.).
method: string &log &optional; method: string &log &optional;
## The value of the HOST header. ## Value of the HOST header.
host: string &log &optional; host: string &log &optional;
## The URI used in the request. ## URI used in the request.
uri: string &log &optional; uri: string &log &optional;
## The value of the "referer" header. The comment is deliberately ## Value of the "referer" header. The comment is deliberately
## misspelled like the standard declares, but the name used here is ## misspelled like the standard declares, but the name used here is
## "referrer" spelled correctly. ## "referrer" spelled correctly.
referrer: string &log &optional; referrer: string &log &optional;
## The value of the User-Agent header from the client. ## Value of the User-Agent header from the client.
user_agent: string &log &optional; user_agent: string &log &optional;
## The actual uncompressed content size of the data transferred from ## Actual uncompressed content size of the data transferred from
## the client. ## the client.
request_body_len: count &log &default=0; request_body_len: count &log &default=0;
## The actual uncompressed content size of the data transferred from ## Actual uncompressed content size of the data transferred from
## the server. ## the server.
response_body_len: count &log &default=0; response_body_len: count &log &default=0;
## The status code returned by the server. ## Status code returned by the server.
status_code: count &log &optional; status_code: count &log &optional;
## The status message returned by the server. ## Status message returned by the server.
status_msg: string &log &optional; status_msg: string &log &optional;
## The last 1xx informational reply code returned by the server. ## Last seen 1xx informational reply code returned by the server.
info_code: count &log &optional; info_code: count &log &optional;
## The last 1xx informational reply message returned by the server. ## Last seen 1xx informational reply message returned by the server.
info_msg: string &log &optional; info_msg: string &log &optional;
## The filename given in the Content-Disposition header ## Filename given in the Content-Disposition header sent by the server.
## sent by the server.
filename: string &log &optional; filename: string &log &optional;
## This is a set of indicators of various attributes discovered and ## A set of indicators of various attributes discovered and
## related to a particular request/response pair. ## related to a particular request/response pair.
tags: set[Tags] &log; tags: set[Tags] &log;
## The username if basic-auth is performed for the request. ## Username if basic-auth is performed for the request.
username: string &log &optional; username: string &log &optional;
## The password if basic-auth is performed for the request. ## Password if basic-auth is performed for the request.
password: string &log &optional; password: string &log &optional;
## This determines if the password will be captured for this request. ## Determines if the password will be captured for this request.
capture_password: bool &default=default_capture_password; capture_password: bool &default=default_capture_password;
## All of the headers that may indicate if the request was proxied. ## All of the headers that may indicate if the request was proxied.
proxied: set[string] &log &optional; proxied: set[string] &log &optional;
}; };
## Structure to maintain state for an HTTP connection with multiple
## requests and responses.
type State: record { type State: record {
## Pending requests.
pending: table[count] of Info; pending: table[count] of Info;
current_response: count &default=0; ## Current request in the pending queue.
current_request: count &default=0; current_request: count &default=0;
## Current response in the pending queue.
current_response: count &default=0;
}; };
## The list of HTTP headers typically used to indicate a proxied request. ## A list of HTTP headers typically used to indicate proxied requests.
const proxy_headers: set[string] = { const proxy_headers: set[string] = {
"FORWARDED", "FORWARDED",
"X-FORWARDED-FOR", "X-FORWARDED-FOR",
@ -83,6 +93,8 @@ export {
"PROXY-CONNECTION", "PROXY-CONNECTION",
} &redef; } &redef;
## Event that can be handled to access the HTTP record as it is sent on
## to the logging framework.
global log_http: event(rec: Info); global log_http: event(rec: Info);
} }
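For example, a handler for the log_http event sketched below could report server errors as records are handed to the logging framework; the fields are checked first since most of them are &optional:

    event HTTP::log_http(rec: HTTP::Info)
        {
        if ( rec?$status_code && rec$status_code >= 500 && rec?$host && rec?$uri )
            print fmt("server error %d for %s%s", rec$status_code, rec$host, rec$uri);
        }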

View file

@ -5,8 +5,31 @@
module HTTP; module HTTP;
export { export {
## Given a string containing a series of key-value pairs separated by "=",
## this function can be used to parse out all of the key names.
##
## data: The raw data, such as a URL or cookie value.
##
## kv_splitter: A regular expression representing the separator between
## key-value pairs.
##
## Returns: A vector of strings containing the keys.
global extract_keys: function(data: string, kv_splitter: pattern): string_vec; global extract_keys: function(data: string, kv_splitter: pattern): string_vec;
## Creates a URL from an :bro:type:`HTTP::Info` record. This should handle
## edge cases such as proxied requests appropriately.
##
## rec: An :bro:type:`HTTP::Info` record.
##
## Returns: A URL, not prefixed by "http://".
global build_url: function(rec: Info): string; global build_url: function(rec: Info): string;
## Creates a URL from an :bro:type:`HTTP::Info` record. This should handle
## edge cases such as proxied requests appropriately.
##
## rec: An :bro:type:`HTTP::Info` record.
##
## Returns: A URL prefixed with "http://".
global build_url_http: function(rec: Info): string; global build_url_http: function(rec: Info): string;
} }
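A small usage sketch for extract_keys; the data string and splitter are hypothetical examples (build_url and build_url_http are used analogously with an HTTP::Info record):

    event bro_init()
        {
        # Should yield the keys "a" and "b" from the hypothetical query string.
        local keys = HTTP::extract_keys("a=1&b=2", /&/);
        for ( i in keys )
            print keys[i];
        }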

View file

@ -5,8 +5,9 @@
##! but that connection will actually be between B and C which could be ##! but that connection will actually be between B and C which could be
##! analyzed on a different worker. ##! analyzed on a different worker.
##! ##!
##! Example line from IRC server indicating that the DCC SEND is about to start:
##! PRIVMSG my_nick :^ADCC SEND whateverfile.zip 3640061780 1026 41709^A # Example line from IRC server indicating that the DCC SEND is about to start:
# PRIVMSG my_nick :^ADCC SEND whateverfile.zip 3640061780 1026 41709^A
@load ./main @load ./main
@load base/utils/files @load base/utils/files
@ -14,23 +15,24 @@
module IRC; module IRC;
export { export {
redef enum Tag += { EXTRACTED_FILE };
## Pattern of file mime types to extract from IRC DCC file transfers. ## Pattern of file mime types to extract from IRC DCC file transfers.
const extract_file_types = /NO_DEFAULT/ &redef; const extract_file_types = /NO_DEFAULT/ &redef;
## The on-disk prefix for files to be extracted from IRC DCC file transfers. ## On-disk prefix for files to be extracted from IRC DCC file transfers.
const extraction_prefix = "irc-dcc-item" &redef; const extraction_prefix = "irc-dcc-item" &redef;
redef record Info += { redef record Info += {
## DCC filename requested.
dcc_file_name: string &log &optional; dcc_file_name: string &log &optional;
## Size of the DCC transfer as indicated by the sender.
dcc_file_size: count &log &optional; dcc_file_size: count &log &optional;
## Sniffed mime type of the file.
dcc_mime_type: string &log &optional; dcc_mime_type: string &log &optional;
## The file handle for the file to be extracted ## The file handle for the file to be extracted
extraction_file: file &log &optional; extraction_file: file &log &optional;
## A boolean to indicate if the current file transfer should be extraced. ## A boolean to indicate if the current file transfer should be extracted.
extract_file: bool &default=F; extract_file: bool &default=F;
## The count of the number of files that have been extracted during the session. ## The count of the number of files that have been extracted during the session.
@ -54,8 +56,10 @@ event file_transferred(c: connection, prefix: string, descr: string,
if ( extract_file_types == irc$dcc_mime_type ) if ( extract_file_types == irc$dcc_mime_type )
{ {
irc$extract_file = T; irc$extract_file = T;
add irc$tags[EXTRACTED_FILE]; }
if ( irc$extract_file )
{
local suffix = fmt("%d.dat", ++irc$num_extracted_files); local suffix = fmt("%d.dat", ++irc$num_extracted_files);
local fname = generate_extraction_filename(extraction_prefix, c, suffix); local fname = generate_extraction_filename(extraction_prefix, c, suffix);
irc$extraction_file = open(fname); irc$extraction_file = open(fname);
@ -76,7 +80,7 @@ event file_transferred(c: connection, prefix: string, descr: string,
Log::write(IRC::LOG, irc); Log::write(IRC::LOG, irc);
irc$command = tmp; irc$command = tmp;
if ( irc$extract_file && irc?$extraction_file ) if ( irc?$extraction_file )
set_contents_file(id, CONTENTS_RESP, irc$extraction_file); set_contents_file(id, CONTENTS_RESP, irc$extraction_file);
# Delete these values in case another DCC transfer # Delete these values in case another DCC transfer

View file

@ -1,36 +1,38 @@
##! This is the script that implements the core IRC analysis support. It only ##! Implements the core IRC analysis support. The logging model is to log
##! logs a very limited subset of the IRC protocol by default. The points ##! IRC commands along with the associated response and some additional
##! that it logs at are NICK commands, USER commands, and JOIN commands. It ##! metadata about the connection if it's available.
##! log various bits of meta data as indicated in the :bro:type:`Info` record
##! along with the command at the command arguments.
module IRC; module IRC;
export { export {
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
type Tag: enum {
EMPTY
};
type Info: record { type Info: record {
## Timestamp when the command was seen.
ts: time &log; ts: time &log;
uid: string &log; uid: string &log;
id: conn_id &log; id: conn_id &log;
## Nick name given for the connection.
nick: string &log &optional; nick: string &log &optional;
## User name given for the connection.
user: string &log &optional; user: string &log &optional;
channels: set[string] &log &optional;
## Command given by the client.
command: string &log &optional; command: string &log &optional;
## Value for the command given by the client.
value: string &log &optional; value: string &log &optional;
## Any additional data for the command.
addl: string &log &optional; addl: string &log &optional;
tags: set[Tag] &log;
}; };
## Event that can be handled to access the IRC record as it is sent on
## to the logging framework.
global irc_log: event(rec: Info); global irc_log: event(rec: Info);
} }
redef record connection += { redef record connection += {
## IRC session information.
irc: Info &optional; irc: Info &optional;
}; };
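As an illustrative sketch, the irc_log event above can be handled like any other logging event; the printed fields are checked first since they are &optional:

    event IRC::irc_log(rec: IRC::Info)
        {
        if ( rec?$nick && rec?$user )
            print fmt("IRC user %s (%s) talking to %s",
                      rec$nick, rec$user, rec$id$resp_h);
        }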

View file

@ -14,15 +14,17 @@
module SSH; module SSH;
export { export {
## The SSH protocol logging stream identifier.
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
redef enum Notice::Type += { redef enum Notice::Type += {
## This indicates that a heuristically detected "successful" SSH ## Indicates that a heuristically detected "successful" SSH
## authentication occurred. ## authentication occurred.
Login Login
}; };
type Info: record { type Info: record {
## Time when the SSH connection began.
ts: time &log; ts: time &log;
uid: string &log; uid: string &log;
id: conn_id &log; id: conn_id &log;
@ -34,11 +36,11 @@ export {
## would be set for the opposite situation. ## would be set for the opposite situation.
# TODO: handle local-local and remote-remote better. # TODO: handle local-local and remote-remote better.
direction: Direction &log &optional; direction: Direction &log &optional;
## The software string given by the client. ## Software string given by the client.
client: string &log &optional; client: string &log &optional;
## The software string given by the server. ## Software string given by the server.
server: string &log &optional; server: string &log &optional;
## The amount of data returned from the server. This is currently ## Amount of data returned from the server. This is currently
## the only measure of the success heuristic and it is logged to ## the only measure of the success heuristic and it is logged to
## assist analysts looking at the logs to make their own determination ## assist analysts looking at the logs to make their own determination
## about the success on a case-by-case basis. ## about the success on a case-by-case basis.
@ -48,8 +50,8 @@ export {
done: bool &default=F; done: bool &default=F;
}; };
## The size in bytes at which the SSH connection is presumed to be ## The size in bytes of data sent by the server at which the SSH
## successful. ## connection is presumed to be successful.
const authentication_data_size = 5500 &redef; const authentication_data_size = 5500 &redef;
## If true, we tell the event engine to not look at further data ## If true, we tell the event engine to not look at further data
@ -58,14 +60,16 @@ export {
## kinds of analyses (e.g., tracking connection size). ## kinds of analyses (e.g., tracking connection size).
const skip_processing_after_detection = F &redef; const skip_processing_after_detection = F &redef;
## This event is generated when the heuristic thinks that a login ## Event that is generated when the heuristic thinks that a login
## was successful. ## was successful.
global heuristic_successful_login: event(c: connection); global heuristic_successful_login: event(c: connection);
## This event is generated when the heuristic thinks that a login ## Event that is generated when the heuristic thinks that a login
## failed. ## failed.
global heuristic_failed_login: event(c: connection); global heuristic_failed_login: event(c: connection);
## Event that can be handled to access the :bro:type:`SSH::Info`
## record as it is sent on to the logging framework.
global log_ssh: event(rec: Info); global log_ssh: event(rec: Info);
} }
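A hedged example of tuning and using the SSH heuristics above; the threshold value is arbitrary:

    # Require a bit less server data before calling a login successful (example value).
    redef SSH::authentication_data_size = 4500;

    event SSH::heuristic_successful_login(c: connection)
        {
        print fmt("heuristically successful SSH login: %s -> %s",
                  c$id$orig_h, c$id$resp_h);
        }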

View file

@ -1,23 +1,29 @@
module SSL; module SSL;
export { export {
const SSLv2 = 0x0002; const SSLv2 = 0x0002;
const SSLv3 = 0x0300; const SSLv3 = 0x0300;
const TLSv10 = 0x0301; const TLSv10 = 0x0301;
const TLSv11 = 0x0302; const TLSv11 = 0x0302;
const TLSv12 = 0x0303;
## Mapping between the constants and string values for SSL/TLS versions.
const version_strings: table[count] of string = { const version_strings: table[count] of string = {
[SSLv2] = "SSLv2", [SSLv2] = "SSLv2",
[SSLv3] = "SSLv3", [SSLv3] = "SSLv3",
[TLSv10] = "TLSv10", [TLSv10] = "TLSv10",
[TLSv11] = "TLSv11", [TLSv11] = "TLSv11",
[TLSv12] = "TLSv12",
} &default="UNKNOWN"; } &default="UNKNOWN";
## Mapping between numeric codes and human readable strings for alert
## levels.
const alert_levels: table[count] of string = { const alert_levels: table[count] of string = {
[1] = "warning", [1] = "warning",
[2] = "fatal", [2] = "fatal",
} &default=function(i: count):string { return fmt("unknown-%d", i); }; } &default=function(i: count):string { return fmt("unknown-%d", i); };
## Mapping between numeric codes and human readable strings for alert
## descriptions.
const alert_descriptions: table[count] of string = { const alert_descriptions: table[count] of string = {
[0] = "close_notify", [0] = "close_notify",
[10] = "unexpected_message", [10] = "unexpected_message",
@ -51,6 +57,9 @@ export {
[115] = "unknown_psk_identity", [115] = "unknown_psk_identity",
} &default=function(i: count):string { return fmt("unknown-%d", i); }; } &default=function(i: count):string { return fmt("unknown-%d", i); };
## Mapping between numeric codes and human readable strings for SSL/TLS
## extensions.
# More information can be found here:
# http://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xml # http://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xml
const extensions: table[count] of string = { const extensions: table[count] of string = {
[0] = "server_name", [0] = "server_name",
@ -69,10 +78,11 @@ export {
[13] = "signature_algorithms", [13] = "signature_algorithms",
[14] = "use_srtp", [14] = "use_srtp",
[35] = "SessionTicket TLS", [35] = "SessionTicket TLS",
[13172] = "next_protocol_negotiation",
[65281] = "renegotiation_info" [65281] = "renegotiation_info"
} &default=function(i: count):string { return fmt("unknown-%d", i); }; } &default=function(i: count):string { return fmt("unknown-%d", i); };
## SSLv2 # SSLv2
const SSLv20_CK_RC4_128_WITH_MD5 = 0x010080; const SSLv20_CK_RC4_128_WITH_MD5 = 0x010080;
const SSLv20_CK_RC4_128_EXPORT40_WITH_MD5 = 0x020080; const SSLv20_CK_RC4_128_EXPORT40_WITH_MD5 = 0x020080;
const SSLv20_CK_RC2_128_CBC_WITH_MD5 = 0x030080; const SSLv20_CK_RC2_128_CBC_WITH_MD5 = 0x030080;
@ -81,7 +91,7 @@ export {
const SSLv20_CK_DES_64_CBC_WITH_MD5 = 0x060040; const SSLv20_CK_DES_64_CBC_WITH_MD5 = 0x060040;
const SSLv20_CK_DES_192_EDE3_CBC_WITH_MD5 = 0x0700C0; const SSLv20_CK_DES_192_EDE3_CBC_WITH_MD5 = 0x0700C0;
## TLS # TLS
const TLS_NULL_WITH_NULL_NULL = 0x0000; const TLS_NULL_WITH_NULL_NULL = 0x0000;
const TLS_RSA_WITH_NULL_MD5 = 0x0001; const TLS_RSA_WITH_NULL_MD5 = 0x0001;
const TLS_RSA_WITH_NULL_SHA = 0x0002; const TLS_RSA_WITH_NULL_SHA = 0x0002;
@ -299,12 +309,10 @@ export {
const SSL_RSA_WITH_3DES_EDE_CBC_MD5 = 0xFF83; const SSL_RSA_WITH_3DES_EDE_CBC_MD5 = 0xFF83;
const TLS_EMPTY_RENEGOTIATION_INFO_SCSV = 0x00FF; const TLS_EMPTY_RENEGOTIATION_INFO_SCSV = 0x00FF;
# --- This is a table of all known cipher specs. ## This is a table of all known cipher specs. It can be used for
# --- It can be used for detecting unknown ciphers and for ## detecting unknown ciphers and for converting the cipher spec constants
# --- converting the cipher spec constants into a human readable format. ## into a human readable format.
const cipher_desc: table[count] of string = { const cipher_desc: table[count] of string = {
# --- sslv20 ---
[SSLv20_CK_RC4_128_EXPORT40_WITH_MD5] = [SSLv20_CK_RC4_128_EXPORT40_WITH_MD5] =
"SSLv20_CK_RC4_128_EXPORT40_WITH_MD5", "SSLv20_CK_RC4_128_EXPORT40_WITH_MD5",
[SSLv20_CK_RC4_128_WITH_MD5] = "SSLv20_CK_RC4_128_WITH_MD5", [SSLv20_CK_RC4_128_WITH_MD5] = "SSLv20_CK_RC4_128_WITH_MD5",
@ -316,7 +324,6 @@ export {
"SSLv20_CK_DES_192_EDE3_CBC_WITH_MD5", "SSLv20_CK_DES_192_EDE3_CBC_WITH_MD5",
[SSLv20_CK_DES_64_CBC_WITH_MD5] = "SSLv20_CK_DES_64_CBC_WITH_MD5", [SSLv20_CK_DES_64_CBC_WITH_MD5] = "SSLv20_CK_DES_64_CBC_WITH_MD5",
# --- TLS ---
[TLS_NULL_WITH_NULL_NULL] = "TLS_NULL_WITH_NULL_NULL", [TLS_NULL_WITH_NULL_NULL] = "TLS_NULL_WITH_NULL_NULL",
[TLS_RSA_WITH_NULL_MD5] = "TLS_RSA_WITH_NULL_MD5", [TLS_RSA_WITH_NULL_MD5] = "TLS_RSA_WITH_NULL_MD5",
[TLS_RSA_WITH_NULL_SHA] = "TLS_RSA_WITH_NULL_SHA", [TLS_RSA_WITH_NULL_SHA] = "TLS_RSA_WITH_NULL_SHA",
@ -530,6 +537,7 @@ export {
[SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA_2] = "SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA_2", [SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA_2] = "SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA_2",
} &default="UNKNOWN"; } &default="UNKNOWN";
## Mapping between the constants and string values for SSL/TLS errors.
const x509_errors: table[count] of string = { const x509_errors: table[count] of string = {
[0] = "ok", [0] = "ok",
[1] = "unable to get issuer cert", [1] = "unable to get issuer cert",

View file

@ -1,3 +1,6 @@
##! Base SSL analysis script. This script logs information about the SSL/TLS
##! handshaking and encryption establishment process.
@load ./consts @load ./consts
module SSL; module SSL;
@ -6,28 +9,41 @@ export {
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
type Info: record { type Info: record {
## Time when the SSL connection began.
ts: time &log; ts: time &log;
uid: string &log; uid: string &log;
id: conn_id &log; id: conn_id &log;
## SSL/TLS version the server offered.
version: string &log &optional; version: string &log &optional;
## SSL/TLS cipher suite the server chose.
cipher: string &log &optional; cipher: string &log &optional;
## Value of the Server Name Indicator SSL/TLS extension. It
## indicates the server name that the client was requesting.
server_name: string &log &optional; server_name: string &log &optional;
## Session ID offered by the client for session resumption.
session_id: string &log &optional; session_id: string &log &optional;
## Subject of the X.509 certificate offered by the server.
subject: string &log &optional; subject: string &log &optional;
## NotValidBefore field value from the server certificate.
not_valid_before: time &log &optional; not_valid_before: time &log &optional;
## NotValidAfter field value from the server certificate.
not_valid_after: time &log &optional; not_valid_after: time &log &optional;
## Last alert that was seen during the connection.
last_alert: string &log &optional; last_alert: string &log &optional;
## Full binary server certificate stored in DER format.
cert: string &optional; cert: string &optional;
## Chain of certificates offered by the server to validate its
## complete signing chain.
cert_chain: vector of string &optional; cert_chain: vector of string &optional;
## This stores the analyzer id used for the analyzer instance attached ## The analyzer ID used for the analyzer instance attached
## to each connection. It is not used for logging since it's a ## to each connection. It is not used for logging since it's a
## meaningless arbitrary number. ## meaningless arbitrary number.
analyzer_id: count &optional; analyzer_id: count &optional;
}; };
## This is where the default root CA bundle is defined. By loading the ## The default root CA bundle. By loading the
## mozilla-ca-list.bro script it will be set to Mozilla's root CA list. ## mozilla-ca-list.bro script it will be set to Mozilla's root CA list.
const root_certs: table[string] of string = {} &redef; const root_certs: table[string] of string = {} &redef;
@ -41,12 +57,9 @@ export {
## utility. ## utility.
const openssl_util = "openssl" &redef; const openssl_util = "openssl" &redef;
## Event that can be handled to access the SSL
## record as it is sent on to the logging framework.
global log_ssl: event(rec: Info); global log_ssl: event(rec: Info);
const ports = {
443/tcp, 563/tcp, 585/tcp, 614/tcp, 636/tcp,
989/tcp, 990/tcp, 992/tcp, 993/tcp, 995/tcp, 5223/tcp
} &redef;
} }
redef record connection += { redef record connection += {
@ -73,6 +86,11 @@ redef capture_filters += {
["xmpps"] = "tcp port 5223", ["xmpps"] = "tcp port 5223",
}; };
const ports = {
443/tcp, 563/tcp, 585/tcp, 614/tcp, 636/tcp,
989/tcp, 990/tcp, 992/tcp, 993/tcp, 995/tcp, 5223/tcp
};
redef dpd_config += { redef dpd_config += {
[[ANALYZER_SSL]] = [$ports = ports] [[ANALYZER_SSL]] = [$ports = ports]
}; };
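For illustration, a minimal handler for the log_ssl event; both fields are &optional and so are checked before use:

    event SSL::log_ssl(rec: SSL::Info)
        {
        if ( rec?$server_name && rec?$subject )
            print fmt("%s requested SNI %s, got cert subject %s",
                      rec$id$orig_h, rec$server_name, rec$subject);
        }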

View file

@ -1,6 +1,9 @@
##! Constant definitions for syslog.
module Syslog; module Syslog;
export { export {
## Mapping between the constants and string values for syslog facilities.
const facility_codes: table[count] of string = { const facility_codes: table[count] of string = {
[0] = "KERN", [0] = "KERN",
[1] = "USER", [1] = "USER",
@ -28,6 +31,7 @@ export {
[23] = "LOCAL7", [23] = "LOCAL7",
} &default=function(c: count): string { return fmt("?-%d", c); }; } &default=function(c: count): string { return fmt("?-%d", c); };
## Mapping between the constants and string values for syslog severities.
const severity_codes: table[count] of string = { const severity_codes: table[count] of string = {
[0] = "EMERG", [0] = "EMERG",
[1] = "ALERT", [1] = "ALERT",

View file

@ -1,4 +1,5 @@
##! Core script support for logging syslog messages. ##! Core script support for logging syslog messages. This script represents
##! one syslog message as one logged record.
@load ./consts @load ./consts
@ -8,19 +9,23 @@ export {
redef enum Log::ID += { LOG }; redef enum Log::ID += { LOG };
type Info: record { type Info: record {
## Timestamp of when the syslog message was seen.
ts: time &log; ts: time &log;
uid: string &log; uid: string &log;
id: conn_id &log; id: conn_id &log;
## Protocol over which the message was seen.
proto: transport_proto &log; proto: transport_proto &log;
## Syslog facility for the message.
facility: string &log; facility: string &log;
## Syslog severity for the message.
severity: string &log; severity: string &log;
## The plain text message.
message: string &log; message: string &log;
}; };
const ports = { 514/udp } &redef;
} }
redef capture_filters += { ["syslog"] = "port 514" }; redef capture_filters += { ["syslog"] = "port 514" };
const ports = { 514/udp } &redef;
redef dpd_config += { [ANALYZER_SYSLOG_BINPAC] = [$ports = ports] }; redef dpd_config += { [ANALYZER_SYSLOG_BINPAC] = [$ports = ports] };
redef likely_server_ports += { 514/udp }; redef likely_server_ports += { 514/udp };

View file

@ -18,7 +18,7 @@ export {
const local_nets: set[subnet] &redef; const local_nets: set[subnet] &redef;
## This is used for retrieving the subnet when you have multiple ## This is used for retrieving the subnet when you have multiple
## :bro:id:`local_nets`. A membership query can be done with an ## :bro:id:`Site::local_nets`. A membership query can be done with an
## :bro:type:`addr` and the table will yield the subnet it was found ## :bro:type:`addr` and the table will yield the subnet it was found
## within. ## within.
global local_nets_table: table[subnet] of subnet = {}; global local_nets_table: table[subnet] of subnet = {};
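A sketch of the membership and lookup behavior described above, assuming local_nets has been redefined; the addresses are example values, and the lookup is deferred with a negative priority so it runs after the table is populated at startup:

    redef Site::local_nets += { 10.0.0.0/8, 192.168.0.0/16 };

    event bro_init() &priority=-10
        {
        if ( 10.1.2.3 in Site::local_nets_table )
            print Site::local_nets_table[10.1.2.3];   # yields 10.0.0.0/8
        }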

View file

@ -1,3 +1,12 @@
##! The controllee portion of the control framework. Load this script if remote
##! runtime control of the Bro process is desired.
##!
##! A controllee only needs to load the controllee script in addition
##! to the specific analysis scripts desired. It may also need a node
##! configured as a controller node in the communications nodes configuration::
##!
##! bro <scripts> frameworks/control/controllee
@load base/frameworks/control @load base/frameworks/control
# If an instance is a controllee, it implicitly needs to listen for remote # If an instance is a controllee, it implicitly needs to listen for remote
# connections. # connections.

View file

@ -1,3 +1,10 @@
##! This is a utility script that implements the controller interface for the
##! control framework. It's intended to be run to control a remote Bro
##! and then shut down.
##!
##! It's intended to be used from the command line like this::
##! bro <scripts> frameworks/control/controller Control::host=<host_addr> Control::port=<host_port> Control::cmd=<command> [Control::arg=<arg>]
@load base/frameworks/control @load base/frameworks/control
@load base/frameworks/communication @load base/frameworks/communication

View file

@ -1,3 +1,6 @@
##! An example of using the metrics framework to collect connection metrics
##! aggregated into /24 CIDR ranges.
@load base/frameworks/metrics @load base/frameworks/metrics
@load base/utils/site @load base/utils/site

View file

@ -1,9 +1,17 @@
##! Provides an example of aggregating and limiting collection down to
##! only local networks. Additionally, the status code for the response from
##! the request is added into the metric.
@load base/frameworks/metrics @load base/frameworks/metrics
@load base/protocols/http @load base/protocols/http
@load base/utils/site @load base/utils/site
redef enum Metrics::ID += { redef enum Metrics::ID += {
## Measures HTTP requests indexed on both the request host and the response
## code from the server.
HTTP_REQUESTS_BY_STATUS_CODE, HTTP_REQUESTS_BY_STATUS_CODE,
## Currently unfinished and not working.
HTTP_REQUESTS_BY_HOST_HEADER, HTTP_REQUESTS_BY_HOST_HEADER,
}; };
@ -11,13 +19,13 @@ event bro_init()
{ {
# TODO: these are waiting on a fix with table vals + records before they will work. # TODO: these are waiting on a fix with table vals + records before they will work.
#Metrics::add_filter(HTTP_REQUESTS_BY_HOST_HEADER, #Metrics::add_filter(HTTP_REQUESTS_BY_HOST_HEADER,
# [$pred(index: Index) = { return Site:is_local_addr(index$host) }, # [$pred(index: Metrics::Index) = { return Site::is_local_addr(index$host); },
# $aggregation_mask=24, # $aggregation_mask=24,
# $break_interval=5mins]); # $break_interval=1min]);
#
## Site::local_nets must be defined in order for this to actually do anything. # Site::local_nets must be defined in order for this to actually do anything.
#Metrics::add_filter(HTTP_REQUESTS_BY_STATUS_CODE, [$aggregation_table=Site::local_nets_table, Metrics::add_filter(HTTP_REQUESTS_BY_STATUS_CODE, [$aggregation_table=Site::local_nets_table,
# $break_interval=5mins]); $break_interval=1min]);
} }
event HTTP::log_http(rec: HTTP::Info) event HTTP::log_http(rec: HTTP::Info)

View file

@ -1,3 +1,8 @@
##! Provides an example of using the metrics framework to collect the number
##! of times a specific server name indicator value is seen in SSL session
##! establishments. Names ending in google.com are being filtered out as an
##! example of the predicate based filtering in metrics filters.
@load base/frameworks/metrics @load base/frameworks/metrics
@load base/protocols/ssl @load base/protocols/ssl

View file

@ -1,3 +1,7 @@
##! Provides the possibility to define software names that are interesting to
##! watch for changes. A notice is generated if software versions change on a
##! host.
@load base/frameworks/notice @load base/frameworks/notice
@load base/frameworks/software @load base/frameworks/software
@ -5,24 +9,17 @@ module Software;
export { export {
redef enum Notice::Type += { redef enum Notice::Type += {
## For certain softwares, a version changing may matter. In that case, ## For certain software, a version changing may matter. In that case,
## this notice will be generated. Software that matters if the version ## this notice will be generated. Software that matters if the version
## changes can be configured with the ## changes can be configured with the
## :bro:id:`Software::interesting_version_changes` variable. ## :bro:id:`Software::interesting_version_changes` variable.
Software_Version_Change, Software_Version_Change,
}; };
## Some software is more interesting when the version changes and this ## Some software is more interesting when the version changes and this is
## a set of all software that should raise a notice when a different ## a set of all software that should raise a notice when a different
## version is seen on a host. ## version is seen on a host.
const interesting_version_changes: set[string] = { const interesting_version_changes: set[string] = { } &redef;
"SSH"
} &redef;
## Some software is more interesting when the version changes and this
## a set of all software that should raise a notice when a different
## version is seen on a host.
const interesting_type_changes: set[string] = {};
} }
event log_software(rec: Info) event log_software(rec: Info)

View file

@ -1,3 +1,7 @@
##! Provides a variable to define vulnerable versions of software and if
##! a version of that software is as old or older than the defined version a
##! notice will be generated.
@load base/frameworks/notice @load base/frameworks/notice
@load base/frameworks/software @load base/frameworks/software
@ -5,6 +9,7 @@ module Software;
export { export {
redef enum Notice::Type += { redef enum Notice::Type += {
## Indicates that a vulnerable version of software was detected.
Vulnerable_Version, Vulnerable_Version,
}; };
@ -18,6 +23,7 @@ event log_software(rec: Info)
if ( rec$name in vulnerable_versions && if ( rec$name in vulnerable_versions &&
cmp_versions(rec$version, vulnerable_versions[rec$name]) <= 0 ) cmp_versions(rec$version, vulnerable_versions[rec$name]) <= 0 )
{ {
NOTICE([$note=Vulnerable_Version, $src=rec$host, $msg=software_fmt(rec)]); NOTICE([$note=Vulnerable_Version, $src=rec$host,
$msg=fmt("A vulnerable version of software was detected: %s", software_fmt(rec))]);
} }
} }

View file

@ -15,7 +15,7 @@ export {
alert: AlertData &log; alert: AlertData &log;
}; };
## This can convert a Barnyard :bro:type:`PacketID` value to a ## This can convert a Barnyard :bro:type:`Barnyard2::PacketID` value to a
## :bro:type:`conn_id` value in the case that you might need to index ## :bro:type:`conn_id` value in the case that you might need to index
## into an existing data structure elsewhere within Bro. ## into an existing data structure elsewhere within Bro.
global pid2cid: function(p: PacketID): conn_id; global pid2cid: function(p: PacketID): conn_id;

View file

@ -42,9 +42,9 @@ export {
const watch_interval = 15mins &redef; const watch_interval = 15mins &redef;
## The percentage of missed data that is considered "too much" ## The percentage of missed data that is considered "too much"
## when the :bro:enum:`Too_Much_Loss` notice should be generated. ## when the :bro:enum:`CaptureLoss::Too_Much_Loss` notice should be
## The value is expressed as a double between 0 and 1 with 1 being ## generated. The value is expressed as a double between 0 and 1 with 1
## 100% ## being 100%
const too_much_loss: double = 0.1 &redef; const too_much_loss: double = 0.1 &redef;
} }

View file

@ -10,7 +10,8 @@ export {
## This event can be generated externally to this script if on-demand ## This event can be generated externally to this script if on-demand
## tracefile rotation is required with the caveat that the script doesn't ## tracefile rotation is required with the caveat that the script doesn't
## currently attempt to get back on schedule automatically and the next ## currently attempt to get back on schedule automatically and the next
## trim will likely won't happen on the :bro:id:`trim_interval`. ## trim likely won't happen on the
## :bro:id:`TrimTraceFile::trim_interval`.
global go: event(first_trim: bool); global go: event(first_trim: bool);
} }

View file

@ -8,8 +8,10 @@
module Known; module Known;
export { export {
## The known-hosts logging stream identifier.
redef enum Log::ID += { HOSTS_LOG }; redef enum Log::ID += { HOSTS_LOG };
## The record type which contains the column fields of the known-hosts log.
type HostsInfo: record { type HostsInfo: record {
## The timestamp at which the host was detected. ## The timestamp at which the host was detected.
ts: time &log; ts: time &log;
@ -19,7 +21,7 @@ export {
}; };
## The hosts whose existence should be logged and tracked. ## The hosts whose existence should be logged and tracked.
## Choices are: LOCAL_HOSTS, REMOTE_HOSTS, ALL_HOSTS, NO_HOSTS ## See :bro:type:`Host` for possible choices.
const host_tracking = LOCAL_HOSTS &redef; const host_tracking = LOCAL_HOSTS &redef;
## The set of all known addresses to store for preventing duplicate ## The set of all known addresses to store for preventing duplicate
@ -29,6 +31,8 @@ export {
## of each individual address is logged each day. ## of each individual address is logged each day.
global known_hosts: set[addr] &create_expire=1day &synchronized &redef; global known_hosts: set[addr] &create_expire=1day &synchronized &redef;
## An event that can be handled to access the :bro:type:`Known::HostsInfo`
## record as it is sent on to the logging framework.
global log_known_hosts: event(rec: HostsInfo); global log_known_hosts: event(rec: HostsInfo);
} }
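As a usage sketch, tracking can be widened beyond local hosts and the daily set consulted from other scripts; the handler below is only illustrative:

    redef Known::host_tracking = ALL_HOSTS;

    event connection_established(c: connection)
        {
        if ( c$id$orig_h !in Known::known_hosts )
            print fmt("%s not yet logged as a known host today", c$id$orig_h);
        }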

View file

@ -8,29 +8,41 @@
module Known; module Known;
export { export {
## The known-services logging stream identifier.
redef enum Log::ID += { SERVICES_LOG }; redef enum Log::ID += { SERVICES_LOG };
## The record type which contains the column fields of the known-services
## log.
type ServicesInfo: record { type ServicesInfo: record {
## The time at which the service was detected.
ts: time &log; ts: time &log;
## The host address on which the service is running.
host: addr &log; host: addr &log;
## The port number on which the service is running.
port_num: port &log; port_num: port &log;
## The transport-layer protocol which the service uses.
port_proto: transport_proto &log; port_proto: transport_proto &log;
## A set of protocols that match the service's connection payloads.
service: set[string] &log; service: set[string] &log;
done: bool &default=F;
}; };
## The hosts whose services should be tracked and logged. ## The hosts whose services should be tracked and logged.
## See :bro:type:`Host` for possible choices.
const service_tracking = LOCAL_HOSTS &redef; const service_tracking = LOCAL_HOSTS &redef;
## Tracks the set of daily-detected services for preventing the logging
## of duplicates, but can also be inspected by other scripts for
## different purposes.
global known_services: set[addr, port] &create_expire=1day &synchronized; global known_services: set[addr, port] &create_expire=1day &synchronized;
## Event that can be handled to access the :bro:type:`Known::ServicesInfo`
## record as it is sent on to the logging framework.
global log_known_services: event(rec: ServicesInfo); global log_known_services: event(rec: ServicesInfo);
} }
redef record connection += { redef record connection += {
## This field is to indicate whether or not the processing for detecting # This field is to indicate whether or not the processing for detecting
## and logging the service for this connection is complete. # and logging the service for this connection is complete.
known_services_done: bool &default=F; known_services_done: bool &default=F;
}; };

View file

@ -7,7 +7,7 @@ module FTP;
export { export {
redef enum Notice::Type += { redef enum Notice::Type += {
## This indicates that a successful response to a "SITE EXEC" ## Indicates that a successful response to a "SITE EXEC"
## command/arg pair was seen. ## command/arg pair was seen.
Site_Exec_Success, Site_Exec_Success,
}; };

View file

@ -12,8 +12,10 @@ module FTP;
export { export {
redef enum Software::Type += { redef enum Software::Type += {
FTP_CLIENT, ## Identifier for FTP clients in the software framework.
FTP_SERVER, CLIENT,
## Not currently implemented.
SERVER,
}; };
} }
@ -21,7 +23,7 @@ event ftp_request(c: connection, command: string, arg: string) &priority=4
{ {
if ( command == "CLNT" ) if ( command == "CLNT" )
{ {
local si = Software::parse(arg, c$id$orig_h, FTP_CLIENT); local si = Software::parse(arg, c$id$orig_h, CLIENT);
Software::found(c$id, si); Software::found(c$id, si);
} }
} }

View file

@ -1,8 +1,8 @@
##! This script takes MD5 sums of files transferred over HTTP and checks them with ##! Detect file downloads over HTTP that have MD5 sums matching files in Team
##! Team Cymru's Malware Hash Registry (http://www.team-cymru.org/Services/MHR/). ##! Cymru's Malware Hash Registry (http://www.team-cymru.org/Services/MHR/).
##! By default, not all file transfers will have MD5 sums calculated. Read the ##! By default, not all file transfers will have MD5 sums calculated. Read the
##! documentation for the :doc:base/protocols/http/file-hash.bro script to see how to ##! documentation for the :doc:`base/protocols/http/file-hash.bro` script to see
##! configure which transfers will have hashes calculated. ##! how to configure which transfers will have hashes calculated.
@load base/frameworks/notice @load base/frameworks/notice
@load base/protocols/http @load base/protocols/http

View file

@ -1,4 +1,4 @@
##! Intelligence based HTTP detections. ##! Intelligence based HTTP detections. Not yet working!
@load base/protocols/http/main @load base/protocols/http/main
@load base/protocols/http/utils @load base/protocols/http/utils

View file

@ -16,7 +16,9 @@ export {
}; };
redef enum Metrics::ID += { redef enum Metrics::ID += {
## Metric to track SQL injection attackers.
SQLI_ATTACKER, SQLI_ATTACKER,
## Metric to track SQL injection victims.
SQLI_VICTIM, SQLI_VICTIM,
}; };
@ -30,17 +32,17 @@ export {
COOKIE_SQLI, COOKIE_SQLI,
}; };
## This defines the threshold that determines if an SQL injection attack ## Defines the threshold that determines if an SQL injection attack
## is ongoing based on the number of requests that appear to be SQL ## is ongoing based on the number of requests that appear to be SQL
## injection attacks. ## injection attacks.
const sqli_requests_threshold = 50 &redef; const sqli_requests_threshold = 50 &redef;
## Interval at which to watch for the :bro:id:`sqli_requests_threshold` ## Interval at which to watch for the
## variable to be crossed. At the end of each interval the counter is ## :bro:id:`HTTP::sqli_requests_threshold` variable to be crossed.
## reset. ## At the end of each interval the counter is reset.
const sqli_requests_interval = 5min &redef; const sqli_requests_interval = 5min &redef;
## This regular expression is used to match URI based SQL injections ## Regular expression used to match URI based SQL injections.
const match_sql_injection_uri = const match_sql_injection_uri =
/[\?&][^[:blank:]\x00-\x37\|]+?=[\-[:alnum:]%]+([[:blank:]\x00-\x37]|\/\*.*?\*\/)*['"]?([[:blank:]\x00-\x37]|\/\*.*?\*\/|\)?;)+.*?([hH][aA][vV][iI][nN][gG]|[uU][nN][iI][oO][nN]|[eE][xX][eE][cC]|[sS][eE][lL][eE][cC][tT]|[dD][eE][lL][eE][tT][eE]|[dD][rR][oO][pP]|[dD][eE][cC][lL][aA][rR][eE]|[cC][rR][eE][aA][tT][eE]|[iI][nN][sS][eE][rR][tT])([[:blank:]\x00-\x37]|\/\*.*?\*\/)+/ /[\?&][^[:blank:]\x00-\x37\|]+?=[\-[:alnum:]%]+([[:blank:]\x00-\x37]|\/\*.*?\*\/)*['"]?([[:blank:]\x00-\x37]|\/\*.*?\*\/|\)?;)+.*?([hH][aA][vV][iI][nN][gG]|[uU][nN][iI][oO][nN]|[eE][xX][eE][cC]|[sS][eE][lL][eE][cC][tT]|[dD][eE][lL][eE][tT][eE]|[dD][rR][oO][pP]|[dD][eE][cC][lL][aA][rR][eE]|[cC][rR][eE][aA][tT][eE]|[iI][nN][sS][eE][rR][tT])([[:blank:]\x00-\x37]|\/\*.*?\*\/)+/
| /[\?&][^[:blank:]\x00-\x37\|]+?=[\-0-9%]+([[:blank:]\x00-\x37]|\/\*.*?\*\/)*['"]?([[:blank:]\x00-\x37]|\/\*.*?\*\/|\)?;)+([xX]?[oO][rR]|[nN]?[aA][nN][dD])([[:blank:]\x00-\x37]|\/\*.*?\*\/)+['"]?(([^a-zA-Z&]+)?=|[eE][xX][iI][sS][tT][sS])/ | /[\?&][^[:blank:]\x00-\x37\|]+?=[\-0-9%]+([[:blank:]\x00-\x37]|\/\*.*?\*\/)*['"]?([[:blank:]\x00-\x37]|\/\*.*?\*\/|\)?;)+([xX]?[oO][rR]|[nN]?[aA][nN][dD])([[:blank:]\x00-\x37]|\/\*.*?\*\/)+['"]?(([^a-zA-Z&]+)?=|[eE][xX][iI][sS][tT][sS])/

View file

@ -1,3 +1,5 @@
##! Detect and log web applications through the software framework.
@load base/frameworks/signatures @load base/frameworks/signatures
@load base/frameworks/software @load base/frameworks/software
@load base/protocols/http @load base/protocols/http
@ -10,10 +12,12 @@ redef Signatures::ignored_ids += /^webapp-/;
export { export {
redef enum Software::Type += { redef enum Software::Type += {
## Identifier for web applications in the software framework.
WEB_APPLICATION, WEB_APPLICATION,
}; };
redef record Software::Info += { redef record Software::Info += {
## Most root URL where the software was discovered.
url: string &optional &log; url: string &optional &log;
}; };
} }

View file

@ -1,5 +1,5 @@
##! This script take advantage of a few ways that installed plugin information ##! Detect browser plugins as they leak through requests to Omniture
##! leaks from web browsers. ##! advertising servers.
@load base/protocols/http @load base/protocols/http
@load base/frameworks/software @load base/frameworks/software
@ -13,6 +13,7 @@ export {
}; };
redef enum Software::Type += { redef enum Software::Type += {
## Identifier for browser plugins in the software framework.
BROWSER_PLUGIN BROWSER_PLUGIN
}; };
} }

View file

@ -6,8 +6,11 @@ module HTTP;
export { export {
redef enum Software::Type += { redef enum Software::Type += {
## Identifier for web servers in the software framework.
SERVER, SERVER,
## Identifier for app servers in the software framework.
APPSERVER, APPSERVER,
## Identifier for web browsers in the software framework.
BROWSER, BROWSER,
}; };

View file

@ -1,4 +1,4 @@
##! This script extracts and logs variables from cookies sent by clients ##! Extracts and logs variable names from cookies sent by clients.
@load base/protocols/http/main @load base/protocols/http/main
@load base/protocols/http/utils @load base/protocols/http/utils
@ -6,6 +6,7 @@
module HTTP; module HTTP;
redef record Info += { redef record Info += {
## Variable names extracted from all cookies.
cookie_vars: vector of string &optional &log; cookie_vars: vector of string &optional &log;
}; };

View file

@ -1,10 +1,12 @@
##! This script extracts and logs variables from the requested URI ##! Extracts and logs variables from the requested URI in the default HTTP
##! logging stream.
@load base/protocols/http @load base/protocols/http
module HTTP; module HTTP;
redef record Info += { redef record Info += {
## Variable names from the URI.
uri_vars: vector of string &optional &log; uri_vars: vector of string &optional &log;
}; };

View file

@ -1,3 +1,5 @@
##! Detect hosts which are doing password guessing attacks and/or password
##! bruteforcing over SSH.
@load base/protocols/ssh @load base/protocols/ssh
@load base/frameworks/metrics @load base/frameworks/metrics
@ -9,17 +11,17 @@ module SSH;
export { export {
redef enum Notice::Type += { redef enum Notice::Type += {
## Indicates that a host has been identified as crossing the ## Indicates that a host has been identified as crossing the
## :bro:id:`password_guesses_limit` threshold with heuristically ## :bro:id:`SSH::password_guesses_limit` threshold with heuristically
## determined failed logins. ## determined failed logins.
Password_Guessing, Password_Guessing,
## Indicates that a host previously identified as a "password guesser" ## Indicates that a host previously identified as a "password guesser"
## has now had a heuristically successful login attempt. ## has now had a heuristically successful login attempt. This is not
## currently implemented.
Login_By_Password_Guesser, Login_By_Password_Guesser,
}; };
redef enum Metrics::ID += { redef enum Metrics::ID += {
## This metric is to measure failed logins with the hope of detecting ## Metric is to measure failed logins.
## bruteforcing hosts.
FAILED_LOGIN, FAILED_LOGIN,
}; };
@ -37,7 +39,7 @@ export {
## client subnets and the yield value represents server subnets. ## client subnets and the yield value represents server subnets.
const ignore_guessers: table[subnet] of subnet &redef; const ignore_guessers: table[subnet] of subnet &redef;
## Keeps track of hosts identified as guessing passwords. ## Tracks hosts identified as guessing passwords.
global password_guessers: set[addr] global password_guessers: set[addr]
&read_expire=guessing_timeout+1hr &synchronized &redef; &read_expire=guessing_timeout+1hr &synchronized &redef;
} }
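A hedged tuning example for the password guessing detection; the subnets are placeholders, and password_guesses_limit is assumed to be &redef as the notice documentation implies:

    redef SSH::password_guesses_limit = 20;

    # Ignore hosts in 10.0.0.0/8 guessing against the lab network (example values).
    redef SSH::ignore_guessers += { [10.0.0.0/8] = 192.168.100.0/24 };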

View file

@ -1,5 +1,4 @@
##! This implements all of the additional information and geodata detections ##! Geodata based detections for SSH analysis.
##! for SSH analysis.
@load base/frameworks/notice @load base/frameworks/notice
@load base/protocols/ssh @load base/protocols/ssh
@ -19,8 +18,8 @@ export {
remote_location: geo_location &log &optional; remote_location: geo_location &log &optional;
}; };
## The set of countries for which you'd like to throw notices upon ## The set of countries for which you'd like to generate notices upon
## successful login ## successful login.
const watched_countries: set[string] = {"RO"} &redef; const watched_countries: set[string] = {"RO"} &redef;
} }

View file

@ -10,9 +10,9 @@ module SSH;
export { export {
redef enum Notice::Type += { redef enum Notice::Type += {
## Generated if a login originates or responds with a host and the ## Generated if a login originates or responds with a host where the
## reverse hostname lookup resolves to a name matched by the ## reverse hostname lookup resolves to a name matched by the
## :bro:id:`interesting_hostnames` regular expression. ## :bro:id:`SSH::interesting_hostnames` regular expression.
Interesting_Hostname_Login, Interesting_Hostname_Login,
}; };

View file

@@ -1,4 +1,4 @@
-##! This script extracts SSH client and server information from SSH
+##! Extracts SSH client and server information from SSH
 ##! connections and forwards it to the software framework.
 @load base/frameworks/software
@@ -7,7 +7,9 @@ module SSH;
 export {
     redef enum Software::Type += {
+        ## Identifier for SSH servers in the software framework.
         SERVER,
+        ## Identifier for SSH clients in the software framework.
         CLIENT,
     };
 }

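A hedged sketch of consuming what this script feeds into the software framework, via the framework's log event (the Software::Info field names used here are assumptions to check against base/frameworks/software):

    @load protocols/ssh/software

    event Software::log_software(rec: Software::Info)
        {
        # Report SSH servers as they are identified on monitored hosts.
        if ( rec$software_type == SSH::SERVER )
            print fmt("SSH server %s observed on %s", rec$name, rec$host);
        }
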
View file

@@ -1,4 +1,4 @@
-##! This script calculates MD5 sums for server DER formatted certificates.
+##! Calculate MD5 sums for server DER formatted certificates.
 @load base/protocols/ssl
@@ -6,6 +6,7 @@ module SSL;
 export {
     redef record Info += {
+        ## MD5 sum of the raw server certificate.
         cert_hash: string &log &optional;
     };
 }

View file

@@ -1,6 +1,6 @@
-##! This script can be used to generate notices when X.509 certificates over
-##! SSL/TLS are expired or going to expire based on the date and time values
-##! stored within the certificate.
+##! Generate notices when X.509 certificates over SSL/TLS are expired or
+##! going to expire soon based on the date and time values stored within the
+##! certificate.
 @load base/protocols/ssl
 @load base/frameworks/notice
@@ -24,12 +24,13 @@ export {
     ## The category of hosts you would like to be notified about which have
     ## certificates that are going to be expiring soon. By default, these
-    ## notices will be suppressed by the notice framework for 1 day.
+    ## notices will be suppressed by the notice framework for 1 day after
+    ## a particular certificate has had a notice generated.
     ## Choices are: LOCAL_HOSTS, REMOTE_HOSTS, ALL_HOSTS, NO_HOSTS
     const notify_certs_expiration = LOCAL_HOSTS &redef;
     ## The time before a certificate is going to expire that you would like to
-    ## start receiving :bro:enum:`Certificate_Expires_Soon` notices.
+    ## start receiving :bro:enum:`SSL::Certificate_Expires_Soon` notices.
     const notify_when_cert_expiring_in = 30days &redef;
 }

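Both exported constants are intended to be overridden from a site policy; for example (values chosen arbitrarily):

    @load protocols/ssl/expiring-certs

    # Warn about certificates on any host, and start warning two months out.
    redef SSL::notify_certs_expiration = ALL_HOSTS;
    redef SSL::notify_when_cert_expiring_in = 60days;
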
View file

@@ -2,7 +2,7 @@
 ##! after being converted to PEM files. The certificates will be stored in
 ##! a single file, one for local certificates and one for remote certificates.
 ##!
-##! A couple of things to think about with this script::
+##! .. note::
 ##!
 ##!   - It doesn't work well on a cluster because each worker will write its
 ##!     own certificate files and no duplicate checking is done across
@@ -20,15 +20,15 @@
 module SSL;
 export {
-    ## Setting to control if host certificates offered by the defined hosts
+    ## Control if host certificates offered by the defined hosts
     ## will be written to the PEM certificates file.
     ## Choices are: LOCAL_HOSTS, REMOTE_HOSTS, ALL_HOSTS, NO_HOSTS
     const extract_certs_pem = LOCAL_HOSTS &redef;
 }
-## This is an internally maintained variable to prevent relogging of
-## certificates that have already been seen. It is indexed on an md5 sum of
-## the certificate.
+# This is an internally maintained variable to prevent relogging of
+# certificates that have already been seen. It is indexed on an md5 sum of
+# the certificate.
 global extracted_certs: set[string] = set() &read_expire=1hr &redef;
 event ssl_established(c: connection) &priority=5

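A one-line site override, following the Choices listed in the comment above:

    @load protocols/ssl/extract-certs-pem

    # Also extract certificates offered by remote servers, not just local ones.
    redef SSL::extract_certs_pem = ALL_HOSTS;
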
View file

@@ -1,5 +1,4 @@
-##! This script can be used to log information about certificates while
-##! attempting to avoid duplicate logging.
+##! Log information about certificates while attempting to avoid duplicate logging.
 @load base/utils/directions-and-hosts
 @load base/protocols/ssl
@@ -36,6 +35,8 @@ export {
     ## in the set is for storing the DER formatted certificate's MD5 hash.
     global certs: set[addr, string] &create_expire=1day &synchronized &redef;
+    ## Event that can be handled to access the loggable record as it is sent
+    ## on to the logging framework.
     global log_known_certs: event(rec: CertsInfo);
 }

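The newly documented log_known_certs event can be handled like any other log event; a sketch in which the CertsInfo field names are assumptions to verify against the script's record definition:

    @load protocols/ssl/known-certs

    event SSL::log_known_certs(rec: SSL::CertsInfo)
        {
        # React to each certificate the first time it is logged.
        print fmt("new certificate on %s: %s", rec$host, rec$subject);
        }
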
View file

@@ -14,8 +14,7 @@ export {
     };
     redef record Info += {
-        ## This stores and logs the result of certificate validation for
-        ## this connection.
+        ## Result of certificate validation for this connection.
         validation_status: string &log &optional;
     };

View file

@@ -1,9 +1 @@
-##! Local site policy loaded only by the manager in a cluster.
-@load base/frameworks/notice
-# If you are running a cluster you should define your Notice::policy here
-# so that notice processing occurs on the manager.
-redef Notice::policy += {
-};
+##! Local site policy loaded only by the manager if Bro is running as a cluster.

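Sites that still want manager-side notice handling can add it back to this file; the entry below is only a sketch and mirrors the example that local.bro keeps commented out:

    @load base/frameworks/notice

    redef Notice::policy += {
        # Example: alarm on every notice at default priority.
        [$action = Notice::ACTION_ALARM, $priority = 0],
    };
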
View file

@@ -1,2 +1 @@
 ##! Local site policy loaded only by the proxies if Bro is running as a cluster.

View file

@@ -1,22 +1,29 @@
-##! Local site policy. Customize as appropriate. This file will not be
-##! overwritten when upgrading or reinstalling.
+##! Local site policy. Customize as appropriate.
+##!
+##! This file will not be overwritten when upgrading or reinstalling!
-# Load the script to log which script were loaded during each run
+# This script logs which scripts were loaded during each run.
 @load misc/loaded-scripts
 # Apply the default tuning scripts for common tuning settings.
 @load tuning/defaults
-# Vulnerable versions of software to generate notices for when discovered.
+# Generate notices when vulnerable versions of software are discovered.
 # The default is to only monitor software found in the address space defined
 # as "local". Refer to the software framework's documentation for more
 # information.
 @load frameworks/software/vulnerable
+# Example vulnerable software. This needs to be updated and maintained over
+# time as new vulnerabilities are discovered.
 redef Software::vulnerable_versions += {
     ["Flash"] = [$major=10,$minor=2,$minor2=153,$addl="1"],
     ["Java"] = [$major=1,$minor=6,$minor2=0,$addl="22"],
 };
+# Detect software changing (e.g. attacker installing hacked SSHD).
+@load frameworks/software/version-changes
 # This adds signatures to detect cleartext forward and reverse windows shells.
 redef signature_files += "frameworks/signatures/detect-windows-shells.sig";
@@ -25,13 +32,15 @@ redef signature_files += "frameworks/signatures/detect-windows-shells.sig";
 # redef Notice::policy += { [$action = Notice::ACTION_ALARM, $priority = 0] };
 # Load all of the scripts that detect software in various protocols.
-@load protocols/http/software
-#@load protocols/http/detect-webapps
 @load protocols/ftp/software
 @load protocols/smtp/software
 @load protocols/ssh/software
+@load protocols/http/software
+# The detect-webapps script could possibly cause performance trouble when
+# running on live traffic. Enable it cautiously.
+#@load protocols/http/detect-webapps
-# Load the script to detect DNS results pointing toward your Site::local_nets
+# This script detects DNS results pointing toward your Site::local_nets
 # where the name is not part of your local DNS zone and is being hosted
 # externally. Requires that the Site::local_zones variable is defined.
 @load protocols/dns/detect-external-names
@@ -39,15 +48,12 @@ redef signature_files += "frameworks/signatures/detect-windows-shells.sig";
 # Script to detect various activity in FTP sessions.
 @load protocols/ftp/detect
-# Detect software changing (e.g. attacker installing hacked SSHD).
-@load frameworks/software/version-changes
 # Scripts that do asset tracking.
 @load protocols/conn/known-hosts
 @load protocols/conn/known-services
 @load protocols/ssl/known-certs
-# Load the script to enable SSL/TLS certificate validation.
+# This script enables SSL/TLS certificate validation.
 @load protocols/ssl/validate-certs
 # If you have libGeoIP support built in, do some geographic detections and
@@ -60,5 +66,5 @@ redef signature_files += "frameworks/signatures/detect-windows-shells.sig";
 # Detect MD5 sums in Team Cymru's Malware Hash Registry.
 @load protocols/http/detect-MHR
-# Detect SQL injection attacks
+# Detect SQL injection attacks.
 @load protocols/http/detect-sqli

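Typical site additions continue in the same style at the end of this file; for instance (version numbers and zone names are placeholders):

    # Track an additional vulnerable software version.
    redef Software::vulnerable_versions += {
        ["OpenSSH"] = [$major=4, $minor=3],
    };

    # Needed by protocols/dns/detect-external-names, loaded above.
    redef Site::local_zones += { "example.com" };
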
Some files were not shown because too many files have changed in this diff.