
Integrating ODL with PNDA open source data analytics platform

February 24, 2017 | Blog


Blog originally posted on PNDAemic: The PNDA.io blog.

So you want to integrate OpenDaylight with PNDA!

It’s really easy to aggregate data into a PNDA instance. But it takes a little more work to stream data out of an OpenDaylight instance. There are three steps to set this up:

  1. Pick a data source, i.e. an OpenDaylight application that can supply data to an OpenDaylight event topic.
  2. Configure the OpenDaylight event aggregator to collect the desired data and publish it on a topic.
  3. Deploy the OpenDaylight Kafka Plugin and configure it to forward events to the Kafka message bus of your PNDA instance.

[Diagram: streaming data from OpenDaylight into PNDA]

THE DATA SOURCE

If you have NETCONF-enabled devices that send notifications, then the odl-netconf-connector might be sufficient to get started. Alternatively, you can write a new OpenDaylight application to collect your desired dataset. I have chosen to write an application that uses SNMP to collect IF-MIB:ifTable data. You can get the application from GitHub here: https://github.com/donaldh/if-table-collector.
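Building it should look something like this, assuming a standard Maven toolchain (a sketch, not verbatim repo instructions):

$ git clone https://github.com/donaldh/if-table-collector.git
$ cd if-table-collector
$ mvn clean install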

Once built, the if-table-collector can be started as a standalone karaf instance:

$ cd if-table-collector
$ ./karaf/target/assembly/bin/karaf

Check that the if-table-collector is running:

opendaylight-user@root>bundle:list | grep if-table
270 | Active | 80 | 0.1.0.SNAPSHOT | if-table-collector-api
274 | Active | 80 | 0.1.0.SNAPSHOT | if-table-collector-impl

Now you can tell the application to collect ifTable data from a device. The application uses an augmentation of topology-netconf to enable SNMP collection for a device. Here’s an example:

POST http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf
Content-Type: application/xml

<node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
  <node-id>{node-name}</node-id>
  <host xmlns="urn:opendaylight:netconf-node-topology">{ip-address}</host>
  <port xmlns="urn:opendaylight:netconf-node-topology">{netconf-port}</port>
  <username xmlns="urn:opendaylight:netconf-node-topology">{username}</username>
  <password xmlns="urn:opendaylight:netconf-node-topology">{password}</password>
  <tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
  <keepalive-delay xmlns="urn:opendaylight:netconf-node-topology">0</keepalive-delay>
  <snmp-community xmlns="urn:net:donaldh:if-table-collector">{snmp-community}</snmp-community>
  <poll-interval xmlns="urn:net:donaldh:if-table-collector">60</poll-interval>
</node>
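If you prefer the command line, a curl invocation would look roughly like this, assuming the default admin/admin RESTCONF credentials and the XML above saved as node.xml (both are assumptions for illustration):

$ curl -u admin:admin -X POST \
    -H "Content-Type: application/xml" \
    -d @node.xml \
    http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf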

You should see the device being polled every 60 seconds:

2017-02-22 16:48:24,079 | INFO | Executing poll cycle for 10.1.1.105 …
2017-02-22 16:48:24,460 | INFO | Polled 6 rows.

CONFIGURING THE EVENT AGGREGATOR

The OpenDaylight event aggregator is responsible for gathering events from the desired data sources within OpenDaylight and publishing them on a specific topic on the OpenDaylight message bus. To get started, we will use a simple topic configuration that matches all notification names for all nodes:

POST http://localhost:8181/restconf/operations/event-aggregator:create-topic
{
  "event-aggregator:input": {
    "notification-pattern": "*",
    "node-id-pattern": ".*"
  }
}
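The same RPC can be invoked with curl, again assuming default admin/admin credentials:

$ curl -u admin:admin -X POST \
    -H "Content-Type: application/json" \
    -d '{"event-aggregator:input": {"notification-pattern": "*", "node-id-pattern": ".*"}}' \
    http://localhost:8181/restconf/operations/event-aggregator:create-topic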

DEPLOYING THE KAFKA PLUGIN

The odl-kafka-plugin can be built and added to any OpenDaylight installation. I have a fork of it here that builds for OpenDaylight Boron-SR2: https://github.com/donaldh/odl-kafka-plugin. Once you have built the kafka-agent plugin, you can copy it to the karaf deploy directory to have it auto-deploy into the running karaf instance.
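As a rough sketch, the build and deploy steps might look like the following; the module name and artifact path are assumptions that may vary by version:

$ git clone https://github.com/donaldh/odl-kafka-plugin.git
$ cd odl-kafka-plugin
$ mvn clean install
$ cp kafka-agent/target/kafka-agent-*.jar \
    ../if-table-collector/karaf/target/assembly/deploy/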

The odl-kafka-plugin simply registers for messages on the OpenDaylight message bus and forwards them to the Kafka message bus. It can be configured to listen for specific topics or, by default, for any topic. I used the simplest configuration, which listens for any topic:

PUT http://localhost:8181/restconf/config/kafka-agent:kafka-producer-config
{
  "kafka-producer-config": {
    "kafka-broker-list": "{kafka-host}:9092",
    "kafka-topic": "odl",
    "compression-type": "none",
    "message-serialization": "avro",
    "avro-schema-namespace": "com.example.project"
  }
}
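Applied with curl, assuming default admin/admin credentials and replacing {kafka-host} with the address of your PNDA Kafka broker:

$ curl -u admin:admin -X PUT \
    -H "Content-Type: application/json" \
    -d '{"kafka-producer-config": {"kafka-broker-list": "{kafka-host}:9092", "kafka-topic": "odl", "compression-type": "none", "message-serialization": "avro", "avro-schema-namespace": "com.example.project"}}' \
    http://localhost:8181/restconf/config/kafka-agent:kafka-producer-config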

VERIFYING THE END RESULTS

You can use the Kafka console consumer to verify that messages are reaching the Kafka bus. The messages are Avro-encoded, so the textual representation looks a bit weird:

% ./bin/kafka-console-consumer.sh --bootstrap-server ${KAFKA_HOST}:9092 --topic odl
?????V$if-table-collector172.16.1.105?
<?xml version="1.0" encoding="UTF-8"?><payload xmlns="urn:cisco:params:xml:ns:yang:messagebus:eventaggregator"><source>172.16.1.105</source><message><IfEntryBuilder><IfAdminStatus>Down</IfAdminStatus><IfInDiscards>0</IfInDiscards><IfInErrors>0</IfInErrors><IfInOctets>0</IfInOctets><IfInUcastPkts>0</IfInUcastPkts><IfInUnknownProtos>0</IfInUnknownProtos><IfIndex>6</IfIndex><IfMtu>1514</IfMtu><IfOperStatus>Down</IfOperStatus><IfOutDiscards>0</IfOutDiscards><IfOutErrors>0</IfOutErrors><IfOutOctets>373</IfOutOctets><IfOutUcastPkts>2</IfOutUcastPkts><IfPhysAddress/><IfSpeed>4294967295</IfSpeed><IfType>Eon</IfType></IfEntryBuilder></message></payload>

Assuming you have already configured the PNDA instance to accept Kafka messages on the “odl” topic, you will start to see ifTable data in the HDFS datastore.