1. eyeCon 4.2.1 to eyeCon 5.0 Upgrade

The following steps will upgrade you from HAWK eyeCon 4.2.1 to eyeCon 5.0.

Warning

If you are upgrading from a version earlier than 4.2.1, you will need to perform those intermediate upgrades first.

1.1. Updating HAWK repo

The HAWK repository must be updated on each server.

  1. Update the HAWK repository for CentOS/RHEL:

user@host#: rpm -hUv http://www.hawkdefense.com/repos/hawk/5.0/RHEL6/noarch/hawk-repo-5.0.3-1.el6.noarch.rpm

  2. Update the HAWK repository for HAWKOSv4:

user@host#: rpm -hUv http://www.hawkdefense.com/repos/hawk/5.0/HAWK4/noarch/hawk-repo-5.0.3-1.hwk4.noarch.rpm

  3. Clean the yum database:

user@host#: yum clean all

  4. Update the packages:

user@host#: yum -y update

1.2. Updating Data Tier

Several new processes are included in HAWK eyeCon 5.0.

1.3. vStream Configuration

1.3.1. vStream Distributed Coordination Server/Client Services

vStream provides a distributed configuration service, synchronization service, and naming registry for large distributed systems.

Three zookeeper installations must be configured. Perform the following steps on each of the three servers.

  1. Edit the zookeeper configuration file.

user@host#: vi /etc/zookeeper/zoo.cfg

  2. Append the following to the bottom of the file, using IP addresses or Hostnames:

server.1=<Server1.IP.Address>:2888:3888
server.2=<Server2.IP.Address>:2888:3888
server.3=<Server3.IP.Address>:2888:3888

  3. Set the server ID, replacing <N> below with 1 for server 1, 2 for server 2, and 3 for server 3.

user@host#: echo <N> > /var/lib/zookeeper/data/myid

  4. Start the zookeeper service.

user@host#: service zookeeper start
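
The zoo.cfg server lines and the per-server myid values above can be sketched as follows; the IP addresses are placeholders, not real hosts:

```python
# Sketch: generate the zoo.cfg ensemble lines and each server's myid
# content. The IP addresses below are placeholders.
servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# One server.N line per ensemble member, using the peer (2888) and
# leader-election (3888) ports from the steps above.
zoo_cfg_lines = [
    f"server.{i}={ip}:2888:3888" for i, ip in enumerate(servers, start=1)
]

# /var/lib/zookeeper/data/myid on server N contains only the number N.
myid_contents = {ip: str(i) for i, ip in enumerate(servers, start=1)}

print(zoo_cfg_lines[0])           # server.1=10.0.0.1:2888:3888
print(myid_contents["10.0.0.2"])  # 2
```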

Note

Run tail -F /var/log/zookeeper/zookeeper.log to ensure zookeeper is running as expected.

1.3.2. vStream Distributed Commit Log Service

vStream is publish-subscribe messaging rethought as a distributed commit log.

kafka must be configured on each data tier server.

  1. Edit the kafka configuration file.

user@host#: vi /opt/kafka/config/server.properties

  2. Find and change the following items.

Find and change the broker id to a unique number starting from 0.

broker.id=0

Find and change the logs directory.

log.dirs=/data/vstream-logs-data01

Find and change the zookeeper connection string, replacing each server placeholder with the correct IP address or Hostname from the zookeeper configuration steps above.

zookeeper.connect=<Server1.IP.Address>:2181,<Server2.IP.Address>:2181,<Server3.IP.Address>:2181

  3. Make the logs directory and set the correct ownership.

user@host#: mkdir /data/vstream-logs-data01

user@host#: chown -R kafka:kafka /data/vstream-logs-data01

  4. Start the kafka service.

user@host#: service kafka start

  5. Repeat steps 1 through 4 on each data tier server.
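
The per-broker settings above can be sketched as follows; the IP addresses and the helper function are illustrative assumptions, not part of the kafka distribution:

```python
# Sketch: derive each broker's server.properties overrides from the
# zookeeper ensemble configured earlier. Addresses are placeholders.
zk_servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
zookeeper_connect = ",".join(f"{ip}:2181" for ip in zk_servers)

def broker_overrides(index, shard="data01"):
    """broker.id must be unique per broker, starting from 0."""
    return {
        "broker.id": str(index),
        "log.dirs": f"/data/vstream-logs-{shard}",
        "zookeeper.connect": zookeeper_connect,
    }

print(broker_overrides(0)["zookeeper.connect"])
```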

Note

Run tail -F /var/log/kafka/server.out to ensure kafka is running as expected.

1.3.3. Configure hawk-data

  1. Edit the hawk-data config.php configuration file; the $CONFIG['MONGO_EVENTS'] array needs to be updated.

user@host#: vi /var/www/hawk-data/htdocs/API/1.1/config.php

Change the <YYYY-MM-DD> date placeholder to the current date.

$CONFIG['MONGO_EVENTS'] = array(
        //! for those upgrading from 4.2
        '1970-01-01 00:00:00' => array('database' => 'hawk', 'collection' => 'events', 'version' => 1 ),
        '<YYYY-MM-DD 00:00:00>' => array('database' => 'hawk2', 'collection' => 'events', 'version' => 2 )

        //! 4.4 settings
        //'1970-01-01 00:00:00' => array('database' => 'hawk2', 'collection' => 'events', 'version' => 2 )
);
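
The date-keyed $CONFIG['MONGO_EVENTS'] map routes each event to the entry whose start date most recently precedes the event's timestamp. A Python sketch of that lookup, using an assumed cutover date for illustration:

```python
from datetime import datetime

# Sketch of date-keyed routing: the entry with the latest start date
# that is <= the event date wins. The 2016 cutover date is illustrative.
MONGO_EVENTS = {
    "1970-01-01 00:00:00": {"database": "hawk",  "collection": "events", "version": 1},
    "2016-01-01 00:00:00": {"database": "hawk2", "collection": "events", "version": 2},
}

def route(event_time):
    fmt = "%Y-%m-%d %H:%M:%S"
    applicable = [k for k in MONGO_EVENTS
                  if datetime.strptime(k, fmt) <= event_time]
    # Keys in this format sort chronologically as strings.
    return MONGO_EVENTS[max(applicable)]

print(route(datetime(2017, 6, 1))["database"])  # after the cutover -> hawk2
print(route(datetime(2015, 6, 1))["database"])  # older events -> hawk
```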

1.3.4. Configure hawk-msgd

It is necessary to install and configure hawk-msgd on each data tier server. hawk-msgd is responsible for reading from vStream and writing it to deep storage or an archive location.

  1. Install hawk-msgd.

user@host#: yum install hawk-msgd -y

  2. Edit the hawk-msgd configuration file. Ensure the shard name matches the shard that is installed on the server you are configuring.

user@host#: vi /opt/hawk/etc/hawk-msgd.cfg

[API]
server = localhost
username = admin
password = password

[SETTINGS]
shard_name = data01
smtp_from = no-reply@hawkdefense.com
insecure = true
timeout = 900
retry = 3
queue_size = 25000
sink_threads = 4
write_buffer_max = 8000
short_names = true
send_emails = true
sink_mongodb = false
sink_archive = false
storage_path = /data
storage_limit = 95%
admins = [ [ "HAWK Administrator", "[email protected]" ] ]

[MSGQ]
zookeeper = server1:2181,server2:2181,server3:2181
server = server1:9092
group = hawk.events

[MEMCACHE]
server = 127.0.0.1:11211

[MONGODB]
server = server1:27001,server2:27001,server3:27001
username = hawk
password = password
database = hawk
collection = events

[ARCHIVE]
path = /archives

[API]
server:

Enter the IP address or Hostname where HAWK API is installed.

username:

Enter the username of a service account created in the HAWK UX.

password:

Enter the password for the service account.

[SETTINGS]
shard_name:

Enter the Mongo shard name that this hawk-msgd will be servicing.

smtp_from:

Enter the e-mail address you want notification to be sent from.

insecure:

Set to True if you are using a self-signed certificate.

timeout:

Connection timeout in milliseconds.

retry:

Connection retry limit.

queue_size:

Number of events to queue.

sink_threads:

Number of threads to use when sinking to backend Tier 2 storage (MongoDB) or Archive.

write_buffer_max:

Number of events to send to backend Tier 2 storage (MongoDB) per batch.

short_names:

Enable short names in event data. (Don’t change unless instructed to do so by HAWK Support)

send_emails:

Enable or Disable hawk-msgd sending notifications for Incidents and full disk warnings.

sink_mongodb:

Enable or Disable hawk-msgd storing events in Tier 2 storage (MongoDB).

sink_archive:

Enable or Disable hawk-msgd archiving events to configured archive location.

storage_path:

Mount point location for hawk-msgd to monitor disk utilization.

storage_limit:

Max disk usage limit percentage. Once reached, hawk-msgd will send notifications to the configured admins.

admins:

List of admins that should be notified when storage limit is reached.

[MSGQ]
zookeeper:

Enter three IP addresses or Hostnames separated by a comma ‘,’ where zookeeper is installed.

server:

If zookeeper is not being utilized, enter each of the vStream (kafka) nodes' IP addresses or Hostnames separated by a comma ','. If zookeeper is being utilized, comment this line out using a # at the beginning of the server line.

group:

Group name being used for vStream.

[MEMCACHE]
server:

Enter the Memcached server IP address or Hostname. If using a memcached pool separate IP addresses or Hostnames by a comma ‘,’.

[MONGODB]
server:

Enter three Mongo shard members IP addresses or Hostnames separated by a comma ‘,’ that match the configured shard name.

username:

Enter the username that is configured for MongoDB authentication.

password:

Enter the password for the username that is configured for MongoDB authentication.

database:

The name of the database being used in MongoDB.

collection:

The name of the collection in the MongoDB database that stores the event information.

[ARCHIVE]
path:

Enter the path for archive data to be stored in.

Note

Run tail -F /var/log/hawk/hawk-msgd to ensure hawk-msgd is running as expected.
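
Since hawk-msgd.cfg is a plain INI file, a quick sanity check of the sections described above can be sketched with Python's configparser; the values below are illustrative, not a real deployment:

```python
import configparser

# Sketch: parse a hawk-msgd.cfg-style INI fragment and sanity-check
# the sections and keys described in the glossary above.
SAMPLE = """\
[API]
server = localhost
username = admin

[SETTINGS]
shard_name = data01
sink_mongodb = false
storage_limit = 95%

[MSGQ]
zookeeper = server1:2181,server2:2181,server3:2181
"""

# interpolation=None lets values like "95%" pass through untouched.
cfg = configparser.ConfigParser(interpolation=None)
cfg.read_string(SAMPLE)

# The shard name must match the Mongo shard installed on this server.
shard = cfg["SETTINGS"]["shard_name"]
# getboolean() understands true/false strings such as sink_mongodb.
sink_mongodb = cfg.getboolean("SETTINGS", "sink_mongodb")
# A zookeeper ensemble should list three nodes.
zk_nodes = cfg["MSGQ"]["zookeeper"].split(",")

print(shard, sink_mongodb, len(zk_nodes))
```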

1.3.5. Configure hawk-producerd

These steps will need to be performed on each HAWK Data Tier server.

  1. Edit the hawk-producerd configuration file.

user@host#: vi /opt/hawk/etc/hawk-producerd.cfg

#!HAWK
#
# Hawk Balancer Configuration File
# .sample hawk-producerd.cfg file
#

# User Definition
User="root"
Group="root"


# Authenticated Data Store
HAWKUrl="https://<USERNAME>:<PASSWORD>@<HAWK-DATA-Server>:8080/API/1.1"

# SSL Configuration
# Toggle SSL Peer Verification
HTTPSSLVerifyPeer="False"
HTTPSSLVerifyHost="False"


# The name of this shard, needed to match up with whats assigned in the UX
Tier1ShardName="vstream-data01"
Tier1ShardURL="vstream://<Server1.IP.Address>:2181,<Server2.IP.Address>:2181,<Server3.IP.Address>:2181"

# The name of this shard, needed to match up with whats assigned in the UX
Tier2ShardName="<Mongo.Shard.Name>"
Tier2ShardURL="mongodb://<IP.Address>:27001,<IP.Address>:27001,<IP.Address>:27001:<Mongo.Username>@<Mongo.Password>/hawk"
Tier2ShardTable="events"


# Delays in milliseconds between sends, -1 is disabled
SendDelay="-1"

#
# Verbosity - Verbosity of our Engine
#       0 - Only log errors, and warning
#       1 - include the above plus information
#       2 - include the above plus debugging
#
Verbosity="1"

# LogSource
#       LOGFILE
#       SYSLOG
LogSource="Logfile"
LogFile="/var/log/hawk/hawk-producerd.log"

User:

System user hawk-producerd should run as.

Group:

System group hawk-producerd should run as.

HAWKUrl:

Connection string to connect to the HAWK API. Username and Password should be the service account you set up in the HAWK UX. Also include the IP address or Hostname of the HAWK API.

HTTPSSLVerifyPeer:

Set to False if using a self-signed SSL certificate.

HTTPSSLVerifyHost:

Set to False if using a self-signed SSL certificate.

Tier1ShardName:

Shard name assigned to this server for vStream.

Tier1ShardURL:

Server IP address or Hostnames of vStream zookeeper servers.

Tier2ShardName:

Shard name assigned to this server for MongoDB.

Tier2ShardURL:

MongoDB connection string listing the shard member servers, the Mongo username and password, and the database.

Tier2ShardTable:

The name of the collection in the MongoDB database that stores the event information.

SendDelay:

Delay in milliseconds between sends, -1 is disabled.

Verbosity:

Provide the requested verbosity threshold to increase or decrease the volume of log output.

LogSource:

Specify the logging output option; LogFile and Syslog are both available options.

LogFile:

Specify the destination output log file for logging if LogSource has been specified as LogFile.
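
The HAWKUrl value described above can be composed as follows; the helper name and credentials are hypothetical, and URL-quoting guards against special characters in the password:

```python
from urllib.parse import quote

# Sketch: compose a HAWKUrl connection string from its parts.
# Credentials and host are placeholders; quote() prevents characters
# like '@' or '/' in the password from breaking the URL.
def hawk_url(username, password, host, port=8080):
    return (f"https://{quote(username, safe='')}:"
            f"{quote(password, safe='')}@{host}:{port}/API/1.1")

print(hawk_url("svc-account", "p@ss/word", "hawk-data.example"))
```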

1.3.6. Configure hawk-eventd

hawk-eventd should be configured at any location where hawk-data (API) is configured.

  1. Edit the hawk-eventd configuration file.

user@host#: vi /opt/hawk/etc/hawk-eventd.cfg

#!HAWK
#
# Hawk Event Daemon Configuration File

User="root"
Group="root"

HAWKUrl="https://<USERNAME>:<PASSWORD>@<HAWK-DATA>:8080/API/1.1"

HTTPSSLVerifyPeer="False"
HTTPSSLVerifyHost="False"

Mode="Messages"
# Messaging
Zookeeper="<Server1.IP.Address>:2181,<Server2.IP.Address>:2181,<Server3.IP.Address>:2181"
WriteDiskTarget="/var/www/hawk-data/logs"
# System Configuration
QueueThreadCount=4
Verbosity="1"
User:

System user hawk-eventd should run as.

Group:

System group hawk-eventd should run as.

HAWKUrl:

Connection string to connect to the HAWK API. Username and Password should be the service account you set up in the HAWK UX. Also include the IP address or Hostname of the HAWK API.

HTTPSSLVerifyPeer:

Set to False if using a self-signed SSL certificate.

HTTPSSLVerifyHost:

Set to False if using a self-signed SSL certificate.

Mode:

Enter the mode hawk-eventd should run in. On the data tier, Mode should be set to "Messages"; on the engine tier, it should be set to "HTTP".

Zookeeper:

Enter three IP addresses or Hostnames separated by a comma ‘,’ where zookeeper is installed.

WriteDiskTarget:

Location of event files saved by HAWK API.

QueueThreadCount:

Number of threads to be used to process event files.

Verbosity:

Provide the requested verbosity threshold to increase or decrease the volume of log output.

1.3.7. Configure hawk-balancerd

HAWK Event Balancer Daemon is a program for evenly distributing events using the LRU (Least recently used) balancing algorithm.

  1. Edit the hawk-balancerd configuration file.

user@host#: vi /opt/hawk/etc/hawk-balancerd.cfg

#!HAWK
#
# Hawk Balancer Configuration File
# .sample hawk-balancer.cfg file
#

# User Definition
User="root"
Group="root"

# Local ip and port to bind to for receiving messages for brokering
BindHost="tcp://0.0.0.0:40010"

#
# Verbosity - Verbosity of our Engine
#       0 - Only log errors, and warning
#       1 - include the above plus information
#       2 - include the above plus debugging
#
Verbosity="1"

# LogSource
#       LOGFILE
#       SYSLOG
LogSource="Logfile"
LogFile="/var/log/hawk/hawk-balancerd.log"

User:

System user hawk-balancerd should run as.

Group:

System group hawk-balancerd should run as.

BindHost:

IP address or Hostname where hawk-balancerd should bind to.

Verbosity:

Provide the requested verbosity threshold to increase or decrease the volume of log output.

LogSource:

Specify the logging output option; LogFile and Syslog are both available options.

LogFile:

Specify the destination output log file for logging if LogSource has been specified as LogFile.
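
The least-recently-used balancing idea this section describes can be illustrated with a short sketch. This is illustrative only, not hawk-balancerd's actual implementation; the worker names are placeholders:

```python
from collections import OrderedDict

# Illustrative LRU selection: always hand the next message to the
# worker that was used least recently.
class LRUBalancer:
    def __init__(self, workers):
        # Front of the ordered dict is the least recently used worker.
        self._order = OrderedDict((w, None) for w in workers)

    def pick(self):
        worker, _ = self._order.popitem(last=False)  # least recently used
        self._order[worker] = None                   # now most recently used
        return worker

b = LRUBalancer(["engine-1", "engine-2", "engine-3"])
print([b.pick() for _ in range(4)])  # cycles through all workers evenly
```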

1.3.8. Configure hawk-streamd

HAWK Stream Aggregation Daemon is a program for reading streams of data and applying quick, short-term ('rolling') aggregation to the results for the user, or streaming the raw data directly.

hawk-streamd should be configured at any location where hawk-data (API) is configured.

  1. Edit the hawk-streamd configuration file.

user@host#: vi /opt/hawk/etc/hawk-streamd.cfg

#!HAWK
#
# Hawk Balancer Configuration File
# .sample hawk-streamd.cfg file
#

# User Definition
User="root"
Group="root"

# Service Port
BindPort="8082"

# Used for storing temporal information, as well as access token/session verification
MemcacheConfig="127.0.0.1:11211"

# MySQL credentials, required for group access control, as well as fetching index information for stream offsets
MySQLHost="mysql://<USERNAME>:<PASSWORD>@<IP.Address.MySQL>/hawk4"

# Specify the used balancer hosts when leveraging a clustered environment
BalancerHost="tcp://127.0.0.1:40010"
# BalancerHost="tcp://127.0.0.2:40010"
# BalancerHost="tcp://127.0.0.3:40010"

# Aggregate and send records every x seconds interval
AggregationInterval="1"

# Maximum record limit
AggregationLimit="50000"

# Delay to sleep after sending in milliseconds
# SendDelay="250"
SendDelay="-1"


# IndexDirectorPrimary allows you to force the index director to use a specific index type of primary
# Ex:
# IndexDirectorPrimary="test[Gg]roup-*"

# IndexDirectorSecondary allows you to force the index director to use a specific index type of secondary
# Ex:
# IndexDirectorSecondary="test[Gg]roup-*"

#
# Verbosity - Verbosity of our Engine
#       0 - Only log errors, and warning
#       1 - include the above plus information
#       2 - include the above plus debugging
#
Verbosity="1"

# LogSource
#       LOGFILE
#       SYSLOG
LogSource="Logfile"
LogFile="/var/log/hawk/hawk-streamd.log"

User:

System user hawk-streamd should run as.

Group:

System group hawk-streamd should run as.

BindPort:

Port hawk-streamd should bind to.

MemcacheConfig:

Enter the Memcached server IP address or Hostname. If using a memcached pool separate IP addresses or Hostnames by a comma ‘,’.

MySQLHost:

Enter the IP address or Hostname where the Directory Service (MySQL) was installed.

BalancerHost:

Enter the IP address or Hostname where hawk-balancerd is installed. If leveraging a clustered environment list each one on a separate line.

AggregationInterval:

Interval in seconds hawk-streamd should aggregate and send records.

AggregationLimit:

Maximum number of records hawk-streamd should aggregate per configured AggregationInterval.

SendDelay:

Delay in milliseconds between sends, -1 is disabled.

IndexDirectorPrimary:

Enforce the index director to use a specific index type of primary.

IndexDirectorSecondary:

Enforce the index director to use a specific index type of secondary.

Verbosity:

Provide the requested verbosity threshold to increase or decrease the volume of log output.

LogSource:

Specify the logging output option; LogFile and Syslog are both available options.

LogFile:

Specify the destination output log file for logging if LogSource has been specified as LogFile.

1.3.9. Configure hawk-reports

  1. Edit the hawk-reports configuration file.

user@host#: vi /opt/hawk/etc/hawk-reports.cfg

[API]
server = <HAWK-DATA>
username = <USERNAME>
password = <PASSWORD>

[SETTINGS]
destination = /var/www/hawk-data/reports/
hosturl = https://<HAWK-DATA>
smtp_from = no-reply@hawkdefense.com
insecure = true
timeout = 300
retry = 3
threads = 10
parallel = 2
save_results = false
send_emails = true

[API]
server:

Enter the IP address or Hostname where HAWK API is installed.

username:

Enter the username of a service account created in the HAWK UX.

password:

Enter the password for the service account.

[SETTINGS]
destination:

System path to save reports.

hosturl:

IP address or Hostname to website. (This will be used to link reports in e-mail notifications.)

smtp_from:

E-mail address you want e-mail notification to come from.

insecure:

If using a self-signed SSL certificate this must be set to 'true'. If using a valid SSL certificate, set it to 'false'.

timeout:

Set the timeout in seconds; 90 seconds is recommended.

retry:

How many times to retry after timeout failure.

threads:

How many concurrent threads to run.

parallel:

How many reports to process at one time.

save_results:

If value equals ‘true’ all results from reports will be saved. Saved results allow you to reprocess any completed report with different settings without having to query the data tier. If set to false it will be required to re-query the data tier.

send_emails:

If value equals ‘true’ e-mails will be sent when completed.

1.3.10. Configure hawk-updatesd

hawk-updatesd is responsible for updating threat intelligence.

  1. Edit the hawk-updatesd configuration file.

user@host#: vi /opt/hawk/etc/hawk-updatesd.cfg

[API]
server = <HAWK-DATA>
username = <USERNAME>
password = <PASSWORD>

[SETTINGS]
smtp_from = no-reply@hawkdefense.com
insecure = true
timeout = 900
retry = 3
parallel = 5
send_emails = true
# specify proxy type, ie: http
# proxy_type = http
# proxy_host = http://1.1.1.1:80

[API]
server:

Enter the IP address or Hostname where HAWK API is installed.

username:

Enter the username of a service account created in the HAWK UX.

password:

Enter the password for the service account.

[SETTINGS]
smtp_from:

E-mail address you want e-mail notification to come from.

insecure:

If using a self-signed SSL certificate this must be set to 'true'. If using a valid SSL certificate, set it to 'false'.

timeout:

Set the timeout in seconds; 90 seconds is recommended.

retry:

How many times to retry after timeout failure.

parallel:

How many threads to use at one time to collect threat intelligence.

send_emails:

If value equals ‘true’ e-mails will be sent when completed.

proxy_type:

Type of proxy that is needed for the server to connect to the Internet. Currently supported: 'http' and 'https'.

proxy_host:

IP address or Hostname of proxy server.
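
The proxy_type / proxy_host pair maps onto a standard HTTP proxy configuration. A sketch using Python's urllib (hawk-updatesd's own proxy handling may differ; the address is a placeholder):

```python
import urllib.request

# Sketch: map proxy_type / proxy_host onto a urllib proxy handler.
proxy_type = "http"
proxy_host = "http://1.1.1.1:80"  # placeholder address

handler = urllib.request.ProxyHandler({proxy_type: proxy_host})
# Requests made through this opener are routed via the proxy.
opener = urllib.request.build_opener(handler)

print(handler.proxies[proxy_type])
```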

1.3.11. Add shards to UX

To add a vStream shard and/or mongo shard, please see the Operations Manual, Adding Shards.

1.3.12. Add license key

To add a license key, please see the Operations Manual, Adding License Key.

1.4. Updating HAWK Engines

1.4.1. Configure hawk-analyticsd

  1. Edit the hawk-analyticsd configuration file.

user@host#: vi /opt/hawk/etc/hawk-analyticsd.cfg

#!HAWK
# Hawk SysLog Configuration File
# .sample hawk-analyticsd.cfg file
#

# Unique Name
HawkName="HAWK5-ECE-01"

# Authenticated Data Store
HAWKUrl="https://admin:password@hawk5-server3:8080/API/1.1"

# SSL Configuration
# Toggle SSL Peer Verification
HTTPSSLVerifyPeer="False"
HTTPSSLVerifyHost="False"

BalancerHost="tcp://127.0.0.1:40010"

# Memcache atomic counter configuration
MemcacheConfig="127.0.0.1:11211"

# User Definition
User="root"
Group="root"

# Number of threads to be used for normalization
NormalizationThreadCount=4

### Queue Configuration

#WriteToDiskCompression="False"

# Maximum amount of queue threads we want to startup
QueueThreadCount=4

# Maximum amount of time event statistics should be tracked
EventCacheTimeOut=28800

# Enable DNS Resolution (slower insertion)
EnableDNS="True"

GeoIPFile="/opt/hawk/etc/GeoLiteCity.dat"

#
# Verbosity - Verbosity of our Engine
#       0 - Only log errors, and warning
#       1 - include the above plus information
#       2 - include the above plus debugging
#
Verbosity="1"

# LogSource
#       LOGFILE
#       SYSLOG
LogSource="Logfile"
LogFile="/var/log/hawk/hawk-analyticsd.log"


EnableAggregation="True"

AggregationRule="alert_name, ip_src, ip_dst, ip_proto, ip_dport, correlation_username, target_username, audit_login"

AggregationTimeWindow="5"

CacheStoreDb="/opt/hawk/analytics"

# Format:
# map, replace: index_field[,...]

# Map Process ID to Application Name
CacheMapReplace="pid, app: group_name, resource_addr"

# Map Dport to Application name, inherit and update if possible
CacheMapReplace="ip_dport, app: group_name, ip_dst"

HawkName:

Specifies a unique name for the engine, which is used for access control, as well as scalability and availability. Example value: HAWK-ENGINE-01; the unique name will be shown in the HAWK Resource Manager.

HAWKUrl:

Connection string to connect to the HAWK API. Username and Password should be the service account you set up in the HAWK UX. Also include the IP address or Hostname of the HAWK API.

HTTPSSLVerifyPeer:

Set to False if using a self-signed SSL certificate.

HTTPSSLVerifyHost:

Set to False if using a self-signed SSL certificate.

BalancerHost:

Enter the IP address or Hostname where hawk-balancerd is installed. If leveraging a clustered environment list each one on a separate line.

MemcacheConfig:

Enter the Memcached server IP address or Hostname. If using a memcached pool separate IP addresses or Hostnames by a comma ‘,’.

User:

System user hawk-analyticsd should run as.

Group:

System group hawk-analyticsd should run as.

NormalizationThreadCount:

Specifies the number of parallel threads to normalize events.

WriteToDiskCompression:

Enable compression on local disk storage of JSON file archive. Default is True

QueueThreadCount:

Specifies the number of parallel threads to correlate, score, and write the events to the specified datastores.

EventCacheTimeOut:

Specify the maximum amount of time event statistics should be tracked (in seconds). Default is 20

EnableDNS:

Specify whether or not to perform DNS lookup requests during the event storage process. This has the potential to significantly degrade write performance. Default: True

GeoIPFile:

Provide the location to our localized GeoIP lookup dataset.

Verbosity:

Provide the requested verbosity threshold to increase or decrease the volume of log output.

LogSource:

Specify the logging output option; LogFile and Syslog are both available options.

LogFile:

Specify the destination output log file for logging if LogSource has been specified as LogFile.

EnableAggregation:

Specifies whether or not to enable data aggregation support.

AggregationRule:

Each AggregationRule specifies a strict record search for the fields provided. If one of the fields does not exist, the aggregation rule will be skipped.

AggregationTimeWindow:

Specifies in the number of seconds, how long to aggregate the events from the time of arrival, until the time for correlation.

CacheStoreDb:

Local database to be used for hawk-analyticsd cache.

CacheMapReplace:

Rules to cache event information to be used to perform document enrichment.
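
The strict matching described for AggregationRule can be illustrated with a short sketch; the rule fields shown and the helper function are hypothetical:

```python
# Sketch of AggregationRule's strict-match behaviour: a rule applies
# only when every listed field exists in the event, and matching
# events are grouped by the tuple of those field values.
RULE = ["alert_name", "ip_src", "ip_dst"]

def aggregation_key(event, rule=RULE):
    if not all(field in event for field in rule):
        return None  # a missing field skips the rule entirely
    return tuple(event[field] for field in rule)

full = {"alert_name": "scan", "ip_src": "1.2.3.4", "ip_dst": "5.6.7.8"}
partial = {"alert_name": "scan", "ip_src": "1.2.3.4"}  # no ip_dst

print(aggregation_key(full))     # ('scan', '1.2.3.4', '5.6.7.8')
print(aggregation_key(partial))  # None
```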

Note

To get more detailed information about hawk-analyticsd, run man hawk-analyticsd.cfg.

1.4.2. Configure hawk-balancerd

Note

Typically it's not required to change any default settings for hawk-balancerd.

  1. Edit the hawk-balancerd configuration file.

user@host#: vi /opt/hawk/etc/hawk-balancerd.cfg

#!HAWK
#
# Hawk Balancer Configuration File
# .sample hawk-balancer.cfg file
#

# User Definition
User="root"
Group="root"

# Local ip and port to bind to for receiving messages for brokering
BindHost="tcp://0.0.0.0:40010"

#
# Verbosity - Verbosity of our Engine
#       0 - Only log errors, and warning
#       1 - include the above plus information
#       2 - include the above plus debugging
#
Verbosity="1"

# LogSource
#       LOGFILE
#       SYSLOG
LogSource="Logfile"
LogFile="/var/log/hawk/hawk-balancerd.log"

User:

System user hawk-balancerd should run as.

Group:

System group hawk-balancerd should run as.

BindHost:

IP address or Hostname where hawk-balancerd should bind to.

Verbosity:

Provide the requested verbosity threshold to increase or decrease the volume of log output.

LogSource:

Specify the logging output option; LogFile and Syslog are both available options.

LogFile:

Specify the destination output log file for logging if LogSource has been specified as LogFile.

1.4.3. Configure hawk-pulsed

  1. Edit the hawk-pulsed configuration file.

user@host#: vi /opt/hawk/etc/hawk-pulsed.cfg

#!HAWK
#
# Hawk Pulse Configuration File
# .sample hawk-pulsed.cfg file
#

# Unique Name
HawkName="HAWK-ECE-01"

# Authenticated Data Store
HAWKUrl="https://username:password@server1:8080/API/1.1"

# SSL Configuration
# Toggle SSL Peer Verification
HTTPSSLVerifyPeer="False"
HTTPSSLVerifyHost="False"

HTTPCredentialSecret="example-secret"

# Hosts responsible for saving our results
ForwardHost="tcp://127.0.0.1:40010"

# User Definition
# So we're not running as root
User="root"
Group="root"

#
# Resource Configuration
#
# Resource Threads for processing each resource.
ResourceThreadCount=3
# Polling Timeout in seconds
ResourcePollTimeout=5

#
# Verbosity - Verbosity of the Hawk Engine
#       0 - Only log errors, and warning
#       1 - include the above plus information
#       2 - include the above plus debugging

Verbosity="1"

#
LogSource="LogFile"
LogFile="/var/log/hawk/hawk-pulsed.log"

HawkName:

Specifies a unique name for the engine, which is used for access control, as well as scalability and availability. Example value: HAWK-ENGINE-01; the unique name will be shown in the HAWK Resource Manager.

HAWKUrl:

Connection string to connect to the HAWK API. Username and Password should be the service account you set up in the HAWK UX. Also include the IP address or Hostname of the HAWK API.

HTTPSSLVerifyPeer:

Set to False if using a self-signed SSL certificate.

HTTPSSLVerifyHost:

Set to False if using a self-signed SSL certificate.

HTTPCredentialSecret:

Specifies the pre-determined shared key secret used for decrypting credentials from the API.

ForwardHost:

Enter the Local IP address or Hostname of hawk-balancerd.

User:

System user hawk-pulsed should run as.

Group:

System group hawk-pulsed should run as.

ResourceThreadCount:

Specifies the number of parallel threads for processing each resource.

ResourcePollTimeout:

Specifies the timeout in seconds for polling data from each resource.

Verbosity:

Provide the requested verbosity threshold to increase or decrease the volume of log output.

LogSource:

Specify the logging output option; LogFile and Syslog are both available options.

LogFile:

Specify the destination output log file for logging if LogSource has been specified as LogFile.

Note

To get more detailed information about hawk-pulsed, run man hawk-pulsed.cfg.

1.4.4. Configure hawk-syslogd

Note

Typically it's not required to change any default settings for hawk-syslogd.

  1. Edit the hawk-syslogd configuration file.

user@host#: vi /opt/hawk/etc/hawk-syslogd.cfg

#!HAWK
#
# Hawk SysLog Configuration File
# .sample hcslogd.cfg file
#

# User Definition
User="root"
Group="root"

# Syslog Configuration
LogHost="udp://0.0.0.0:514"
LogHost="tcp://0.0.0.0:514"
LogHost="ssl://0.0.0.0:8514"

# Hosts responsible for saving our results
ForwardHost="tcp://127.0.0.1:40010"

# SSL Configuration
SSLLease="1024"
SSLCrt="/opt/hawk/etc/ssl.crt"
SSLCsr="/opt/hawk/etc/ssl.csr"
SSLKey="/opt/hawk/etc/ssl.key"
SSLSecret="example-secret"
SSLSubject="CN=hawkdefense.com/O=HAWK Network Defense, Inc./C=US/ST=TX/L=Dallas"

#
# Verbosity - Verbosity of our Engine
#       0 - Only log errors, and warning
#       1 - include the above plus information
#       2 - include the above plus debugging
#
Verbosity="1"
# LogSource
#       LOGFILE
#       SYSLOG
LogSource="Logfile"
LogFile="/var/log/hawk/hawk-syslogd.log"

User:

System user hawk-syslogd should run as.

Group:

System group hawk-syslogd should run as.

LogHost:

Specifies parameters for binding on specific addresses and protocols for syslog event feeds.

ForwardHost:

Enter the Local IP address or Hostname of hawk-balancerd.

Verbosity:

Provide the requested verbosity threshold to increase or decrease the volume of log output.

SSLLease:

Specify the amount of days our generated SSL certificate is valid for.

SSLCrt:

Specify the destination to store our SSL certificate file.

SSLCsr:

Specify the destination to store our SSL certificate request.

SSLKey:

Specify the destination to store our SSL keyfile.

SSLSecret:

Specify the SSL passphrase used for generating the SSL certificate.

SSLSubject:

Specify the SSL certificate parameters for certificate generation.

LogSource:

Specify the logging output option; LogFile and Syslog are both available options.

LogFile:

Specify the destination output log file for logging if LogSource has been specified as LogFile.

Note

To get more detailed information about hawk-syslogd, run man hawk-syslogd.cfg.

1.4.5. Configure hawk-eventd

hawk-eventd should be configured at any location where hawk-data (API) is configured.

  1. Edit the hawk-eventd configuration file.

user@host#: vi /opt/hawk/etc/hawk-eventd.cfg

#!HAWK
#
# Hawk Event Daemon Configuration File
User="root"
Group="root"

HAWKUrl="https://admin:password@hawk5-server3:8080/API/1.1"

# Toggle SSL Peer Verification
HTTPSSLVerifyPeer="False"
HTTPSSLVerifyHost="False"

Mode="HTTP"

LogDirectory="/opt/hawk/events"
# System Configuration
QueueThreadCount=4
Verbosity="1"

User:

System user hawk-eventd should run as.

Group:

System group hawk-eventd should run as.

HAWKUrl:

Connection string to connect to the HAWK API. Username and Password should be the service account you set up in the HAWK UX. Also include the IP address or Hostname of the HAWK API.

HTTPSSLVerifyPeer:

Set to False if using a self-signed SSL certificate.

HTTPSSLVerifyHost:

Set to False if using a self-signed SSL certificate.

Mode:

Enter the mode hawk-eventd should run in. On the data tier, Mode should be set to "Messages"; on the engine tier, it should be set to "HTTP".

LogDirectory:

Location of event files saved by hawk-analyticsd.

QueueThreadCount:

Number of threads to be used to process event files.

Verbosity:

Provide the requested verbosity threshold to increase or decrease the volume of log output.

1.4.6. Restarting Services

After you are done configuring the HAWK engine, the services must be restarted for the changes to take effect.

user@host:# service hawk-balancerd restart

user@host:# service hawk-analyticsd restart

user@host:# service hawk-pulsed restart

user@host:# service hawk-syslogd restart

user@host:# service hawk-eventd restart