Channel: Tutorial/Howto – Weberblog.net

MRTG/Routers2: Counting Traceroute Hops


I was interested in generating graphs within the MRTG/Routers2 monitoring system that display the number of hops for an IP connection through the Internet. In my opinion, it is interesting to see the different routing run times/hop counts, e.g., for remote offices that are connected via dynamic ISP connections such as DSL. Therefore, I wrote a small script that executes a traceroute command and can be called from MRTG.

Traceroute Script

Here comes my script “traceroute2mrtg”, which needs the destination host as a parameter: traceroute2mrtg www.webernetz.net. It calls the “traceroute” command with a few options and stores the hop count in a variable. If the destination was not reachable for traceroute, the maximum hop count of “30” is changed to “0” so that the MRTG graphs show *nothing* instead of *30* in this case.

I have moved this script to “/usr/local/bin” and, of course, made it executable with sudo chmod u+x traceroute2mrtg:
#!/bin/bash
#######################################################################
#Author:           Johannes Weber (johannes@webernetz.net)            #
#Homepage:         http://blog.webernetz.net                          #
#Last Modified:    2014-01-07                                         #
#######################################################################

#Only one parameter for this tool: The destination
dest=$1

#Basic traceroute with:
#-I for using ICMP messages instead of UDP (needs root privileges)
#-n since the DNS lookups are not needed here
#-w 2 to wait only 2 seconds for a response, in order not to slow down MRTG too much
#Error messages are discarded via 2>/dev/null
#Piped into "tail -1" to keep only the last line
#Piped to awk to return only the number of hops
hops=`traceroute -I -n -w 2 "$dest" 2>/dev/null | tail -1 | awk '{print $1}'`

#If the destination is unreachable, traceroute stops after 30 hops.
#In order to have an unreachable host displayed as "0" and not as "30", the value is changed in that case:
if [ "$hops" == "30" ]
  then hops=0
fi

#Output for MRTG (which basically needs 4 lines)
echo $hops
echo 0
#echo no uptime to report here
#echo traceroute $dest

Note that the reported value is NOT the actual hop count to the destination, since the last line of a traceroute already shows the final destination itself. That is, the real hop count would be the reported value decremented by one. However, I am not subtracting 1 from this value because I do not want to confuse myself when comparing the MRTG graphs with other traceroute tests.
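The extraction and zero-out steps of the script can be checked offline by feeding them a hand-written last line of a traceroute (the IP address and timing values below are made up for illustration):

```shell
# Hypothetical last line of a traceroute that reached its target at hop 11:
last='11  93.184.216.34  12.345 ms'

# Same extraction as in the script: the first field is the hop count
hops=$(echo "$last" | awk '{print $1}')

# Same zero-out rule as in the script: a count of 30 means "unreachable"
if [ "$hops" == "30" ]; then hops=0; fi
echo "$hops"
```

A last line starting with “30” would print “0” instead, which is exactly what keeps the graphs empty for unreachable destinations.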

Here is an example of my script:

weberjoh@jw-vm01:~$ sudo traceroute2mrtg www.webernetz.net
11
0

 

MRTG/Routers2 Config

Following is my MRTG/Routers2 configuration part for the hop count. It mainly calls my script with the destination host “domain.name”.

Target[foobar_traceroute]: `/usr/local/bin/traceroute2mrtg domain.name`
Title[foobar_traceroute]: Traceroute Hop Counts to domain.name
#MaxBytes: Since the standard traceroute command stops after 30 hops, this value fits for the MaxBytes, too.
MaxBytes[foobar_traceroute]: 30
Options[foobar_traceroute]: gauge
Colours[foobar_traceroute]: BROWN#660000, YELLOW#FFD600, BLACK#000000, ORANGE#FC7C01
YLegend[foobar_traceroute]: Number of Hops
Legend1[foobar_traceroute]: Hops
Legend3[foobar_traceroute]: Peak Hops
LegendI[foobar_traceroute]: Hops:
ShortLegend[foobar_traceroute]:  
routers.cgi*ShortDesc[foobar_traceroute]: Traceroute Hop Count
routers.cgi*Options[foobar_traceroute]: fixunit integer maximum nomax noo nopercentile nototal
routers.cgi*Icon[foobar_traceroute]: graph-sm.gif

Note that I specified the “maximum” option. That is, the weekly, monthly, and yearly graphs show the maximum hop count for their ranges and NOT the averages with a maximum line above them. This is more useful if certain destinations are down for a longer time and MRTG stores “0”. If the maximum option were not specified, the average values for a week would be much lower than realistic, for example if the destination was not reachable for a few days.

Also note that the average calculations on the graphs (the middle column with text) are NOT realistic if a destination is not reachable for a certain amount of time. It is only meaningful if the destination was reachable all the time. This behaviour cannot be changed here since there is no option to tell MRTG/Routers2 to “not display the Avg values if the values return a zero”.

Sample Graphs

Here are a few examples from my monitoring system. The first one reveals that the hop count to the destination constantly alternates between 15 and 16 hops. Here, the calculated average value of 15 makes sense:

nuern.cfg-nuern_traceroute-d-l2

The second one presents the hop count to a dynamic ISP connection which restarts every night. The hop count switches between 13 and 15 hops:

fdorf.cfg-fdorf_traceroute-w-l2

The last one shows the hop counts to one of the domains of the NTP Pool Project, in this case 0.de.pool.ntp.org. This DNS name changes its IP addresses by round robin every 150 seconds. Since my MRTG installation runs every 5 minutes, I am getting a new IP address on every run. Of course, all destinations have different hop counts, mainly between 10 and 14 hops. However, some of these nodes do not reply to the ping requests, which results in values of “0” in the graphs. That is, the calculated average values do NOT make sense here.

WeiterePings.cfg-WeiterePings_tr_0.de.pool.ntp.org-d-l2


MRTG/Routers2: Adding a Linux Host


This post describes how to add a Linux machine to the MRTG/Routers2 monitoring server. First, the host must be able to process SNMP requests. Then, a *.cfg file for MRTG/Routers2 is created by running the “cfgmaker” tool with a host-template. Since a few values are wrong in the cfgmaker file, I also explain how to correct them. Finally, I am adding the mrtg-ping-probe lines to the configuration.

Installation of SNMP

Two packages are mandatory for SNMP: snmp, which is the tool to send SNMP requests to a machine, and snmpd, which is the daemon that listens on port 161 for incoming SNMP requests. On an Ubuntu machine, both packages can easily be installed with:

sudo apt-get install snmp snmpd

After that, the snmpd configuration must be customized in order to allow incoming SNMP requests. This slightly depends on whether these requests are sent from the localhost (e.g., if the MRTG machine wants to query itself), or from a remote host over the network (which is the normal case). In both cases, the snmpd.conf file must be opened with:

sudo nano /etc/snmp/snmpd.conf

In the snmpd.conf, a paragraph called “AGENT BEHAVIOUR” specifies the listening state of snmpd. In order to allow incoming requests from any machine via IPv4, the following line fits:

agentAddress udp:161

In the paragraph called “ACCESS CONTROL”, a line that allows read-only access for an SNMP community can be specified, as in these two examples:

rocommunity public localhost
rocommunity COMMUNITY 192.168.0.0/16

Now restart the daemon with

sudo /etc/init.d/snmpd restart

and test the configuration with the snmpwalk tool, which runs through all (!) SNMP values the localhost presents:

snmpwalk -v 2c -c public localhost .1.3.6

If many hundreds of lines flow over the screen, everything is OK.

Creating the CFG File

I am creating my *.cfg files for Linux servers with the host-generic.htp template from Steve Shipway's forum here. The main command is the following:

sudo cfgmaker --snmp-options=:::::2 --host-template=host-generic.htp --output=jw-vm01.cfg COMMUNITY@192.168.4.11

This creates the “jw-vm01.cfg” file and customizes the settings. This file must then be copied into the “/etc/mrtg/” folder so that MRTG queries it. However, I advise changing the following lines:

  • Delete all lines under “### Global Config Options” because they are not needed. Especially the “WorkDir:” value is wrong at this position, since it is already specified in the global mrtg.cfg configuration file if MRTG is installed as described in my tutorial.
  • The memory values are wrongly calculated by the host template: They need to be multiplied by 1024. E.g., instead of
    MaxBytes1[192.168.9.6-memory]: 448180
    it must be
    MaxBytes1[192.168.9.6-memory]: 458936320
  • The CPU, memory, and filesystem graphs are not shown from “0 – 100 %” but only from “0 – current max value %”. In my opinion, I get more information if the graph always shows up to 100 %. So I added
    Unscaled[localhost-cpu]: dwmy
    to the CPU area,
    Unscaled[localhost-memory]: dwmy
    to the memory area, and
    Unscaled[localhost.disk.xy]: dwmy
    to all disks (replace “xy” by the disk value shown in the other lines).
  • I like all interfaces with a graph type of mirror. That is:
    routers.cgi*GraphStyle[interface]: mirror
    for all interfaces.
  • Furthermore, I always add a ping to the node (which is not very interesting for the localhost in this example, though). This requires having the “mrtg-ping-probe” package installed. I listed the MRTG/Routers2 config here.
  • Finally, I have a recurrent coloring style, i.e., yellow for CPU, pink for processes, turquoise for users, and orange for RAM. So I added the following lines inside the appropriate sections:
    Colours[localhost-cpu]: Lightyellow#FEED01, Blue#0000FF, Orange#FF6307, Purple#FF00FF
    Colours[localhost-memory]: Orange#FC7C01, Green#00CC00, Darkred#660000, Darkgreen#006600
    Colours[localhost-lavg]: Lightyellow#FEED01, Blue#0000FF, Orange#FF6307, Purple#FF00FF
    Colours[localhost-users]: Turquois#00CCCC, Darkyellow#CCCC00, Darkturquois#377D77, Orange#E97F02
    Colours[localhost-procs]: Pink#FF00AA, Yellow#FFD600, Darkpurple#7608AA, Orange#FC7C01
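The kByte-to-byte conversion from the memory bullet above can be verified with plain shell arithmetic:

```shell
# host-generic.htp reports memory in kBytes; MRTG's MaxBytes expects bytes,
# so the template value has to be multiplied by 1024:
kbytes=448180
bytes=$((kbytes * 1024))
echo "$bytes"   # -> 458936320
```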

     

Here is an example of my localhost *.cfg file with all the above mentioned changes:

Graphs

If everything is OK and MRTG stores the values correctly, the graphs should grow every five minutes and look like this:

[Graphs: CPU, space used on /, load average, processes, users, traffic eth0, memory available, ping times]

Measuring Temperatures with PCsensor’s TEMPerHUM Sensor


I am always interested in capturing real values via hardware devices in order to generate the appropriate graphs with my monitoring system. Of course, the outside temperature in our city was the prime candidate for such a project. Therefore, I ordered a few temperature/humidity sensors from PCsensor (via eBay), plugged them via USB into my Raspberry Pi (Raspbian Linux), and queried them via SNMP from my MRTG/Routers2 monitoring server. Here is the whole story:

Hardware

The USB sticks can also be ordered from PCsensor directly. I am using the sensor called “TEMPerHUM” with the USB ID “1130:660c“. (Google is a really good source for finding similar projects using exactly this type of sensor!) It has a built-in temperature and humidity sensor. Here is a photo of the USB stick:

TEMPerHUM Stick

The following messages appeared in dmesg after plugging the USB device in:
Oct 16 10:41:25 jw-nb09 kernel: [1793988.148047] usb 2-1: new low-speed USB device number 2 using ohci_hcd
Oct 16 10:41:26 jw-nb09 kernel: [1793988.946117] input:  PCsensor Temper as /devices/pci0000:00/0000:00:1c.0/usb2/2-1/2-1:1.0/input/input5
Oct 16 10:41:26 jw-nb09 kernel: [1793988.946555] generic-usb 0003:1130:660C.0001: input,hidraw0: USB HID v1.10 Keyboard [ PCsensor Temper] on usb-0000:00:1c.0-1/input0
Oct 16 10:41:26 jw-nb09 kernel: [1793988.977273] input:  PCsensor Temper as /devices/pci0000:00/0000:00:1c.0/usb2/2-1/2-1:1.1/input/input6
Oct 16 10:41:26 jw-nb09 kernel: [1793988.977777] generic-usb 0003:1130:660C.0002: input,hidraw1: USB HID v1.10 Device [ PCsensor Temper] on usb-0000:00:1c.0-1/input1
Oct 16 10:41:26 jw-nb09 kernel: [1793988.977829] usbcore: registered new interface driver usbhid
Oct 16 10:41:26 jw-nb09 kernel: [1793988.977834] usbhid: USB HID core driver

With lsusb, the following “Foot Pedal/Thermometer” appeared:

Bus 001 Device 005: ID 1130:660c Tenx Technology, Inc. Foot Pedal/Thermometer

Driver etc.

After quite a few tests with different drivers for this sensor, I found a working one called HID-TEMPerHUM. It needs “libusb-dev” as a prerequisite:

sudo apt-get install libusb-dev

After downloading the HID-TEMPerHUM driver with

git clone https://github.com/jeixav/HID-TEMPerHUM

it must be built with make, made executable with sudo chmod u+x temper, and moved to the following folder:

sudo mv temper /usr/local/bin/

The driver needs root privileges. An example call looks like this:

pi@jw-pi01 ~ $ sudo temper
8.50 67.45

The first value is the temperature while the second one is the humidity.

Two improvements of this procedure: Change the ownership of the program with sudo chown root temper and set the setuid bit with sudo chmod +s temper, so that every user can run this executable with the rights of the owner.

So, the first steps are done. Great. ;)

Script for MRTG

However, I wanted to query the values via SNMP from my MRTG/Routers2 installation. At first this required a small change in the output of the “temper” program. I wrote a small script called “temperMRTG.sh” which calls the original temper program, but displays the two values without a dot on two distinct lines. This is my script:

#!/bin/bash

/usr/local/bin/temper | sed 's/[.]//g' | tr '[:space:]' '\n'

#echo "Line3"
#echo "Line4"
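The pipeline can be checked offline by feeding it a sample temper output (quoting the sed and tr arguments is advisable so the shell cannot glob-expand the bracket expressions):

```shell
# Simulate a "temper" reading of 8.50 °C and 67.45 % r.H. and run it
# through the same pipeline as in the script: strip the dots, then put
# each value on its own line.
echo "8.50 67.45" | sed 's/[.]//g' | tr '[:space:]' '\n'
```

This prints 850 and 6718-style integers, one per line, which is exactly the format MRTG expects.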

Make it executable with chmod u+x temperMRTG.sh and move it to the same folder as the temper program itself: sudo mv temperMRTG.sh /usr/local/bin/. A call now looks like this:

pi@jw-pi01 ~ $ temperMRTG.sh
876
6718

 

Querying the Sensor via SNMP

(Not exactly true: You are actually querying the Raspberry Pi into which the USB sensor is plugged.) The final step is to add the script to the snmpd.conf file to make it accessible via SNMP. (For further information about the SNMP daemon, see my post about adding a Linux host to MRTG.) Open the snmpd.conf file

sudo nano /etc/snmp/snmpd.conf
and add the following line under the “EXTENDING THE AGENT” section:
extend-sh temperature /bin/sh /usr/local/bin/temperMRTG.sh

A restart of the snmpd is required: sudo /etc/init.d/snmpd restart. Now “walk the NET-SNMP-EXTEND-MIB tables [...] to see the resulting output”, as stated in the snmpd.conf. As always, I am using the iReasoning MIB Browser for this job. This quickly revealed the following OIDs:

temperMRTG.sh via SNMP MIB Browser

MRTG/Routers2 Configuration

Finally, here is my MRTG/Routers2 configuration for this sensor. It queries the two OIDs that I found in the above step with the MIB browser. Note that these OIDs might be different on your system! Also note the “/100” at the very end of the first line, which shifts the decimal point of the values two places to the left. I am using a graph with two y-axes, one on the left (temperature) and one on the right (humidity). Refer to the sample screenshots at the end of this post for a better understanding.

Target[temperature_fdorf-outside]: 1.3.6.1.4.1.8072.1.3.2.4.1.2.11.116.101.109.112.101.114.97.116.117.114.101.1&1.3.6.1.4.1.8072.1.3.2.4.1.2.11.116.101.109.112.101.114.97.116.117.114.101.2:pA7EHRaKQZfm@192.168.86.5:::::2 /100
MaxBytes[temperature_fdorf-outside]: 100
#Since my USB device sometimes reports humidity values greater than 100, I am using the AbsMax value here:
AbsMax[temperature_fdorf-outside]: 120
Title[temperature_fdorf-outside]: Temperature & Humidity -- Fdorf Outside
Options[temperature_fdorf-outside]: gauge
#for only printing the Peak Value on the yearly-graph
WithPeak[temperature_fdorf-outside]: y
Colours[temperature_fdorf-outside]: Red#FF0000, Blue#0000FF, Darkred#800000, Purple#FF00FF
YLegend[temperature_fdorf-outside]: Temperature °C
Legend1[temperature_fdorf-outside]: Temperature
Legend2[temperature_fdorf-outside]: Humidity
Legend3[temperature_fdorf-outside]: Peak Temperature
Legend4[temperature_fdorf-outside]: Peak Humidity
LegendI[temperature_fdorf-outside]: Temperature:
LegendO[temperature_fdorf-outside]: Humidity:
ShortLegend[temperature_fdorf-outside]: °C
routers.cgi*Options[temperature_fdorf-outside]: fixunit nomax nopercentile nototal
#routers.cgi*GraphStyle[temperature_fdorf-outside]: lines
routers.cgi*ShortDesc[temperature_fdorf-outside]: Fdorf Outside
routers.cgi*ShortLegend2[temperature_fdorf-outside]:  %
routers.cgi*YLegend2[temperature_fdorf-outside]: Humidity %
routers.cgi*ScaleShift[temperature_fdorf-outside]: 4
#routers.cgi*UpperLimit[temperature_fdorf-outside]: 30
#This one does not work with negative values:
#routers.cgi*LowerLimit[temperature_fdorf-outside]: -10
#And this one does not exist at all :( The workaround is to use ScaleShift
#routers.cgi*UpperLimit2[temperature_fdorf-outside]: 100
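The “/100” divisor at the end of the Target line can be reproduced with awk: the raw value 876 seen earlier corresponds to 8.76 °C:

```shell
# MRTG divides the raw SNMP integer by 100, restoring the decimal point
# that the temperMRTG.sh script stripped:
raw=876
echo "$raw" | awk '{printf "%.2f\n", $1 / 100}'   # -> 8.76
```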

 

Changing the RRD Files

In order to store negative values (which is not common on a plain MRTG installation, since it normally only reports positive traffic counters from routers/switches, not negative temperatures), the RRD files for the temperature must be adjusted a bit. The “rrdtool tune” command does that. At first, move to the folder in which all RRD files are stored. In my case, this is “/var/mrtg/”. Locate the correct *.rrd file and read it out with:

rrdtool info temperature_fdorf-outside.rrd

The interesting line looks like this:

ds[ds0].min = 0.0000000000e+00

To have a minimum value of “-100” stored, the RRD must be changed with root privileges, such as:

sudo rrdtool tune temperature_fdorf-outside.rrd --minimum ds0:-100

After that, when running “info” one more time, the line should look like this:

ds[ds0].min = -1.0000000000e+02

DONE!!! ;)

Sample Screenshots

Since the temperature in Germany was not that interesting during the last few weeks, these graphs look quite boring, i.e., no negative temperatures or the like. However, here we go:

The weekly graph. The monthly graph. The yearly graph (not filled with data yet).

By the way, this is the sensor on the outside at our balcony:

TEMPerHUM Stick Outside

Though it is not meant to be used outside a building, I have not encountered any problems with this USB sensor. Of course, it is placed under a small roof and therefore protected from direct rain and weather. This has the disadvantage that the measured temperature differs a little from the real temperature due to the window near the sensor.

Links

Policy Based Forwarding (PBF) on a Palo Alto Firewall


This is a small example of how to configure policy based forwarding (PBF) on a Palo Alto Networks firewall. The use case was to route all user-generated http and https traffic through a cheap ADSL connection, while all other business traffic is routed as usual through the better SDSL connection. Since I ran into two problems with this simple scenario, I am showing the solutions here.

The PAN-OS version covered during these tests was 6.0.0.

The basic setup is really easy: The SDSL router is behind eth1/1, and a default route on the PA points to that router. The ADSL router resides behind eth1/2 and has NO static route entry in the virtual router on the PA. No second virtual router or the like is needed. (This limits the usage of the second ISP connection: It can only be used for this policy based forwarding and not for incoming connections. E.g., no remote access VPN tunnels can terminate on this second connection, since there is no default route pointing back through it. However, static site-to-site VPNs could be used if the remote endpoint has an entry in the routing table.)

The routing decision based on the destination ports 80 and 443 is made within the Policy Based Forwarding rules in the Policies tab. The following screenshots document my policy. Note that I have NOT selected the applications “web-browsing” and “ssl” but the mere ports, i.e., services. This is due to the fact that the PA cannot decide which application it sees based on the very first packet. Therefore, I simply forward all requests to ports 80 and 443 to the ADSL connection:

PBF 1 General The source is specified to a single zone and the appropriate IPv4 address space in that zone. The PBF decision is made on the layer 4 ports on the outgoing connections. Route it to the ADSL router.

The following screenshot shows that the same destination is called once with ping and once with http. According to the PBF, both connections take a different egress interface:

PBF same src-dst IP but different ports

Do not PBF my Private Networks

My policy was a bit too broad: I was not able to reach my own http servers on the LAN anymore. ;) Of course, the single PBF rule forwards all http requests to the ADSL router. The solution was to add a second PBF rule BEFORE the existing one, which has the destination IP addresses set to all internal IPv4 addresses (e.g., all RFC 1918 addresses) and an action of “No PBF”.

IPv4 to the Left, IPv6 to the Right

Another problem in my scenario was IPv6, since I could not route my global unicast IPv6 space, delegated by the ISP of the SDSL connection, through the other ADSL connection. :( (Really bad, because IPv6 is the solution for so many other cases. Here it is a bit difficult. And yes, the hated NAT for IPv4 makes it easy to use PBF in this scenario.)

The solution was to configure another “no-pbf” rule that forwards all IPv6 packets to their normal default router, which is of course capable of routing this global unicast IPv6 range.

Here is a screenshot of my final policy:

PBF final policies

After that, all connections worked as expected. To show both links at the same time, a homepage that reveals both Internet Protocol addresses, such as the German www.wieistmeineip.de site, can be used. Here, the IPv4 address from the dynamic ADSL connection as well as the global unicast IPv6 address (with privacy extensions enabled) are shown:

wieistmeineip.de IPv4-to-the-left IPv6-to-the-right

The egress interface can be seen in the traffic log. IPv4 connections are correctly forwarded to the ADSL router on eth1/2 while IPv6 traffic still goes out on eth1/1:

Traffic Log IPv4-to-the-left IPv6-to-the-right

Reference

Policy-Based Routing (PBR) on a Juniper ScreenOS Firewall


Here comes an example of how to configure policy-based routing (PBR) on a Juniper ScreenOS firewall. The requirement at the customer's site was to forward all http and https connections through a cheap but fast DSL Internet connection, while the business-relevant applications (mail, VoIP, ftp, …) should rely on the reliable ISP connection with static IPv4 addresses.

I am showing the five relevant menus to configure PBR on the ScreenOS GUI.

The software version running on the Juniper SSG5 during this test was 6.3.0r16a.0.

Policy within five Submenus

The PBR configuration is straightforward through the five submenus under Network -> Routing -> PBR: The Extended ACL defines the relevant IP & port matches, which are grouped in a Match Group. The Action Group defines the forwarding to the DSL router. The Match Group and Action Group are tied together in a Policy, which is then added to an interface in the Policy Binding.

As always, here are my configuration screenshots:

The Extended ACL defines the source IPv4 ranges and the destination ports 80 (http) and 443 (https). The destination IPv4 address is set to any (0.0.0.0/0). Since I have only one ACL, the match group called "Match-Surf-DSL" looks quite boring. The "Action-Surf-DSL" action group defines the forwarding to the DSL router behind interface eth0/3. The "Policy-Surf-DSL": Connections that match the Match Group take the action in the Action Group. Finally, the Policy Binding on the incoming interface of the traffic: The "Policy-Surf-DSL" is tied to eth0/5.10 (DMZ).

I was not quite sure to which VR/zone/interface the policy must be bound. This document from Juniper points to the interface, while this one refers to the zone and the interface. However, it worked after binding the policy to the interface only, and it also worked after an additional binding to the zone.

Of course, a security policy must also be configured. For the sake of completeness I am showing my single policy with a SNAT, too:

A single any-any policy. Nothing interesting to see here. ;) But note the source translation to the interface IP: This preserves the reverse route on the DSL router.

PBR with different Virtual Routers

I also tried the concept with two virtual routers – one for each ISP connection. In this way, incoming connections through the DSL router would be possible, e.g., for VPNs, because it would have its own default route back to the Internet. Unfortunately, I was not able to correctly configure the policy-based routing to another virtual router, though I followed this document from Juniper. Maybe I misunderstood something about the “self-referenced host route”. However, in my opinion, this concept from Juniper does not look reliable at all. Therefore, I am using the normal PBR scenario without the possibility of accepting incoming connections.

Links

Basic ISP Load Balancing with a Cisco Router


“We have two independent DSL connections to the Internet and want to share the bandwidth for our users.” This was the basic requirement for a load balancing solution at the customer's site. After searching a while for dedicated load balancers and thinking about a do-it-yourself Linux router solution, I used an old Cisco router (type 2621, about 40 € on eBay) with two default routes, each pointing to one of the ISP routers. That fits. ;)

Configuration

I configured the router with two interfaces/networks: one facing the two ISP routers (10.49.253.0/24) and the other facing the internal firewall (transfer network 10.49.254.0/24).

Then I added two default routes pointing to the two ISP routers (AVM FRITZ!Box devices):

ip route 0.0.0.0 0.0.0.0 FastEthernet0/0 10.49.253.1
ip route 0.0.0.0 0.0.0.0 FastEthernet0/0 10.49.253.2

That is, the routing table looks like this (note the last two lines):

Guest-Loadbalancer01#show ip route
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route

Gateway of last resort is 10.49.253.1 to network 0.0.0.0

     10.0.0.0/8 is variably subnetted, 5 subnets, 3 masks
[...]
C       10.49.254.0/24 is directly connected, FastEthernet0/1
C       10.49.253.0/24 is directly connected, FastEthernet0/0
S*   0.0.0.0/0 [1/0] via 10.49.253.1, FastEthernet0/0
               [1/0] via 10.49.253.2, FastEthernet0/0

 

From now on, every new IPv4 connection to the outside is routed alternately via one of the two default routes. Connections to the same destination IPv4 address are routed through the same router.
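The per-destination behavior can be illustrated with a toy sketch (this is NOT Cisco's actual CEF hash, just an analogous deterministic choice): derive the gateway from the destination IP so that the same destination always gets the same next hop, while different destinations spread over both routers.

```shell
# Toy per-destination gateway selection (illustration only, not Cisco's
# real algorithm): sum the octets of the destination IP and use the
# parity to pick one of the two default gateways.
pick_gw() {
  ip=$1
  sum=0
  for octet in $(echo "$ip" | tr '.' ' '); do
    sum=$((sum + octet))
  done
  if [ $((sum % 2)) -eq 0 ]; then
    echo "10.49.253.1"
  else
    echo "10.49.253.2"
  fi
}

pick_gw 93.184.216.34   # same destination -> same gateway on every call
```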

Functional Test

For testing purposes, I browsed to a few different what-is-my-ip homepages such as my own http://ip.webernetz.net/ script or http://www.wieistmeineip.de/. This immediately revealed the two different IPv4 connections, as seen in these screenshots:

2014-03-06 10_32_28-ipv4.webernetz.net - Iron 2014-03-06 10_32_42-Wie ist meine IP-Adresse_ - Iron

Speed Test

Both ISP connections have a DSL download capacity of almost 10 MBit/s = 1.25 MByte/s. I ran a basic test with two simultaneous downloads of Knoppix, with the result that both downloads used their capacity completely. The overall download rate was about 2 MByte/s.
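The MBit-to-MByte conversion is a simple division by 8 bits per byte:

```shell
# 10 MBit/s divided by 8 bits per byte gives the rate in MByte/s:
echo 10 | awk '{printf "%.2f MByte/s\n", $1 / 8}'   # -> 1.25 MByte/s
```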

The following two graphs show the CPU usage of the Cisco 2621 router. During the period of the first graph, one of the two downloads finished, so the CPU usage decreased:

Guest-Loadbalancer01#show processes cpu history

Guest-Loadbalancer01   04:57:15 PM Monday Mar 3 2014 UTC

    1111111111122222111111111111111222224444444444444444444444
    8666667777700000999999999966666555550000088888777776666655
100
 90
 80
 70
 60
 50                                          *******************
 40                                     ************************
 30                                *****************************
 20 ************************************************************
 10 ************************************************************
   0....5....1....1....2....2....3....3....4....4....5....5....
             0    5    0    5    0    5    0    5    0    5
               CPU% per second (last 60 seconds)
                         1
    544545564541 3       0 13
    13488208716125     540876            11 121121  1  1 111
100                      *
 90                      *
 80                      *
 70        *             *
 60    *   *             *
 50 *  ********          *
 40 ##########*  *       *  *
 30 ###########  *       #  *
 20 ###########  *       # **
 10 ###########* *     * #***
   0....5....1....1....2....2....3....3....4....4....5....5....
             0    5    0    5    0    5    0    5    0    5
               CPU% per minute (last 60 minutes)
              * = maximum CPU%   # = average CPU%

That is, the router is more than 50 % busy with these two downloads. However, for the guest Wifi, it fits. ;)

Monitoring MAC-IPv6 Address Bindings


In the IPv4 world, the DHCP server allocates IPv4 addresses and thereby stores the MAC addresses of the clients. In the IPv6 world, if SLAAC (autoconfiguration) is used, no network or security device per se stores the binding between the MAC (layer 2) and IPv6 (layer 3) addresses of the clients. That is, a subsequent analysis of network behaviour relating concrete IPv6 addresses to their client machines is not possible anymore.

A simple way to overcome this issue is to install a service that captures Duplicate Address Detection (DAD) messages from all clients on the subnet in order to store the bindings of MAC and IPv6 addresses. This can be done with a small Tcpdump script on a dedicated Ethernet interface of a Linux host.

In this blog post I will present a use case for storing these bindings, the concept of the DAD messages, a Tcpdump script for doing this job, and the disadvantages and alternatives of this method.

My Use Case

A customer wanted to test IPv6 in order to achieve knowledge for the IT admins. We decided to activate IPv6 on the guest/BYOD Wifi. In this way, many users and clients will operate with IPv6 without touching the productive LAN. One of the first questions in this scenario was: How can we know which device had which IPv6 address at a given time? How can we trace back which device (user) has done some malicious actions when we only see many different “privacy extended” IPv6 addresses in the firewall log without any MAC addresses?

Since we had no layer 2 switches that support first-hop security services such as neighbor discovery monitor or router advertisement guard, the idea was to use a host on the local network to sniff and “search” for new IPv6 addresses and their correspondent MAC addresses.

Duplicate Address Detection

Every IPv6 node on the network that wants to register a new IPv6 address must run the duplicate address detection (DAD) method in order to find out whether this IPv6 address is already used by another IPv6 node: “Duplicate Address Detection MUST be performed on all unicast addresses prior to assigning them to an interface, regardless of whether they are obtained through stateless autoconfiguration, DHCPv6, or manual configuration, [...]“, RFC 4862, Section 5.4. This ICMPv6 neighbor solicitation message is sent to the solicited-node multicast address of the target address. (Remember: There are no broadcasts anymore with IPv6.) However, since common layer 2 switches do not maintain multicast listener groups, they simply forward these messages to all switchports. That is: Our sniffing Linux host will receive all DAD messages that appear on the local network.

Here is a Wireshark screenshot of a DAD message. The Ethernet frame is sent to an IPv6 multicast MAC address (beginning with 33:33:) with the source MAC address of the IPv6 node, in this case an Apple device. The message has ICMPv6 type 135 and a “Target Address” of its tentative link-local IPv6 address (fe80::…):

DAD message in Wireshark

The Tcpdump line for the same DAD message looks like this:

2014-01-06 13:45:20.997130 b8:ff:61:4f:98:f4 > 33:33:ff:4f:98:f4, ethertype IPv6 (0x86dd), length 78: (hlim 255, next-header ICMPv6 (58) payload length: 24) :: > ff02::1:ff4f:98f4: [icmp6 sum ok] ICMP6, neighbor solicitation, length 24, who has fe80::baff:61ff:fe4f:98f4
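The relationship between the target address, the solicited-node multicast address (ff02::1:ffXX:XXXX), and the multicast MAC address (33:33:ff:XX:XX:XX) visible in the line above can be sketched with two small shell functions. This is only an illustration; the helper names are made up, and it assumes the low 24 bits of the address are written out in full (which they are in tails like fe80::baff:61ff:fe4f:98f4):

```shell
#!/bin/bash
# Derive the solicited-node multicast address and the matching multicast
# MAC address from the low 24 bits of an IPv6 address (hypothetical helpers).

solicited_node() {
  local hex low
  hex=$(echo "$1" | tr -d ':')   # drop colons, keep only hex digits
  low=${hex: -6}                 # low 24 bits = last 6 hex digits
  echo "ff02::1:ff${low:0:2}:${low:2:4}"
}

multicast_mac() {
  local hex low
  hex=$(echo "$1" | tr -d ':')
  low=${hex: -6}
  echo "33:33:ff:${low:0:2}:${low:2:4}"
}

solicited_node fe80::baff:61ff:fe4f:98f4   # -> ff02::1:ff4f:98f4
multicast_mac  fe80::baff:61ff:fe4f:98f4   # -> 33:33:ff:4f:98f4
```

Both results match the Tcpdump line above: the frame goes to 33:33:ff:4f:98f4 and the packet to ff02::1:ff4f:98f4.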

 

Tcpdump Script and Analysis

I divide my method into two parts: First, a Tcpdump process runs in the background and stores all DAD messages in a *.pcap file. For the subsequent analysis, the stored *.pcap file is read out into a textfile, which can then be analysed with grep, etc.

For capturing the IPv6 packets, I recommend using a second hardware interface on the Linux host which does not participate at layer 3 at all, i.e., has no IPv6 addresses:

sudo sysctl -w net.ipv6.conf.eth1.disable_ipv6=1

That is: It is only used for passively sniffing the network.

Creation of the Pcap File

DAD messages can be classified unambiguously since they are “Neighbor Solicitation” ICMPv6 messages of type 135 (RFC 4861) and are always sent from the unspecified source address “::” (RFC 4862). In the filter below, “ip6[40]” refers to the first byte after the fixed 40-byte IPv6 header, i.e., the ICMPv6 type field. That is, the following Tcpdump filter captures only DAD messages:

ip6[40]=135 and src host ::

I am storing the logfiles in a subfolder under /var/log:

sudo mkdir /var/log/dad

The complete Tcpdump command, which uses the second interface card on the host and creates a new *.pcap file every day, looks like this:
tcpdump -i eth1 -G 86400 -w '/var/log/dad/dad_%Y-%m-%d.pcap' 'ip6[40]=135 and src host ::'

Grep an IPv6 Address

Since a new pcap file is created every 24 hours, the program “mergecap” can be used to concatenate them into a single file. This gives a wider time range for searching addresses. The following command merges all pcap files from the /var/log/dad/ folder into a single file:

mergecap -w dad_2014-01-all.pcap /var/log/dad/dad_2014-01-*

In the next step, Tcpdump is used to read the file and write it into a textfile. I use it with three more options:

  • “-e” to print the link-level header on each dump line. This reveals the MAC addresses.
  • “-n” to not convert addresses (i.e., host addresses, port numbers, etc.) to names. This omits the “(oui Unknown)” strings.
  • “-tttt” to print the date (and not only the time) for each packet.

This is my example in which the “dad_2014-01-all.txt” file is created:

tcpdump -e -n -tttt -r dad_2014-01-all.pcap > dad_2014-01-all.txt

Here is a sample output of this textfile. Among others it shows the date and time, the source MAC address, and the target IPv6 address.

2014-01-09 12:21:48.026418 90:18:7c:da:81:46 > 33:33:ff:da:81:46, ethertype IPv6 (0x86dd), length 78: :: > ff02::1:ffda:8146: ICMP6, neighbor solicitation, who has fe80::9218:7cff:feda:8146, length 24
2014-01-09 12:22:45.632341 90:18:7c:da:81:46 > 33:33:ff:da:81:46, ethertype IPv6 (0x86dd), length 78: :: > ff02::1:ffda:8146: ICMP6, neighbor solicitation, who has fe80::9218:7cff:feda:8146, length 24
2014-01-09 12:28:42.093714 40:b0:fa:6e:ed:87 > 33:33:ff:6e:ed:87, ethertype IPv6 (0x86dd), length 78: :: > ff02::1:ff6e:ed87: ICMP6, neighbor solicitation, who has fe80::42b0:faff:fe6e:ed87, length 24
2014-01-09 12:28:43.283808 40:b0:fa:6e:ed:87 > 33:33:ff:f6:11:99, ethertype IPv6 (0x86dd), length 78: :: > ff02::1:fff6:1199: ICMP6, neighbor solicitation, who has 2001:db8:cafe:face:903d:81ad:64f6:1199, length 24
2014-01-09 12:28:43.764744 40:b0:fa:6e:ed:87 > 33:33:ff:6e:ed:87, ethertype IPv6 (0x86dd), length 78: :: > ff02::1:ff6e:ed87: ICMP6, neighbor solicitation, who has 2001:db8:cafe:face:42b0:faff:fe6e:ed87, length 24
2014-01-09 12:32:04.173900 d4:20:6d:dc:82:89 > 33:33:ff:dc:82:89, ethertype IPv6 (0x86dd), length 78: :: > ff02::1:ffdc:8289: ICMP6, neighbor solicitation, who has 2001:db8:cafe:face:d620:6dff:fedc:8289, length 24
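Given this textfile format (date and time first, then the source MAC as the third whitespace-separated field), a quick overview of how many distinct clients appeared can be sketched as follows (count_macs is a made-up helper name, not part of the method above):

```shell
#!/bin/bash
# Count the distinct source MAC addresses in a Tcpdump textfile such as
# dad_2014-01-all.txt. Field 3 of each line is the source MAC address.
count_macs() {
  awk '{print $3}' "$1" | sort -u | wc -l
}

# usage: count_macs dad_2014-01-all.txt
```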

 

Now it’s fairly easy to find an IPv6 address. Simply use grep, such as:

weberjoh@jw-nb09:~$ cat dad_2014-01-all.txt | grep 2001:db8:cafe:face:c09:863b:fe6a:3a9b
2014-01-06 11:41:23.989316 c0:63:94:76:75:dc > 33:33:ff:6a:3a:9b, ethertype IPv6 (0x86dd), length 78: :: > ff02::1:ff6a:3a9b: ICMP6, neighbor solicitation, who has 2001:db8:cafe:face:c09:863b:fe6a:3a9b, length 24
2014-01-06 12:58:17.435834 c0:63:94:76:75:dc > 33:33:ff:6a:3a:9b, ethertype IPv6 (0x86dd), length 78: :: > ff02::1:ff6a:3a9b: ICMP6, neighbor solicitation, who has 2001:db8:cafe:face:c09:863b:fe6a:3a9b, length 24
2014-01-07 10:00:06.395379 c0:63:94:76:75:dc > 33:33:ff:6a:3a:9b, ethertype IPv6 (0x86dd), length 78: :: > ff02::1:ff6a:3a9b: ICMP6, neighbor solicitation, who has 2001:db8:cafe:face:c09:863b:fe6a:3a9b, length 24
2014-01-07 10:03:45.842748 c0:63:94:76:75:dc > 33:33:ff:6a:3a:9b, ethertype IPv6 (0x86dd), length 78: :: > ff02::1:ff6a:3a9b: ICMP6, neighbor solicitation, who has 2001:db8:cafe:face:c09:863b:fe6a:3a9b, length 24
2014-01-08 14:04:03.688527 c0:63:94:76:75:dc > 33:33:ff:6a:3a:9b, ethertype IPv6 (0x86dd), length 78: :: > ff02::1:ff6a:3a9b: ICMP6, neighbor solicitation, who has 2001:db8:cafe:face:c09:863b:fe6a:3a9b, length 24
2014-01-08 14:06:44.922273 c0:63:94:76:75:dc > 33:33:ff:6a:3a:9b, ethertype IPv6 (0x86dd), length 78: :: > ff02::1:ff6a:3a9b: ICMP6, neighbor solicitation, who has 2001:db8:cafe:face:c09:863b:fe6a:3a9b, length 24
2014-01-08 15:46:30.434536 c0:63:94:76:75:dc > 33:33:ff:6a:3a:9b, ethertype IPv6 (0x86dd), length 78: :: > ff02::1:ff6a:3a9b: ICMP6, neighbor solicitation, who has 2001:db8:cafe:face:c09:863b:fe6a:3a9b, length 24
2014-01-09 09:41:32.377631 c0:63:94:76:75:dc > 33:33:ff:6a:3a:9b, ethertype IPv6 (0x86dd), length 78: :: > ff02::1:ff6a:3a9b: ICMP6, neighbor solicitation, who has 2001:db8:cafe:face:c09:863b:fe6a:3a9b, length 24
2014-01-09 10:11:30.264455 c0:63:94:76:75:dc > 33:33:ff:6a:3a:9b, ethertype IPv6 (0x86dd), length 78: :: > ff02::1:ff6a:3a9b: ICMP6, neighbor solicitation, who has 2001:db8:cafe:face:c09:863b:fe6a:3a9b, length 24

This output shows the MAC address of the client and the times at which it sent out a DAD message, which happened a few times between the 6th and the 9th of January.

–> At this point, the basic method of monitoring MAC-IPv6 address bindings is complete. However, with some basic Linux tools the gathered information can be presented a bit better:

Table with Time, MAC & IPv6 Addresses

To show only the date & time, the MAC address, and the IPv6 address, “sed” with a few regular expressions (

sed s/regexp/replacement/

 ) can be used to strip all other fields:
weberjoh@jw-nb09:~$ cat dad_2014-01-all.txt | sed s/\>.*has.// | sed s/,.*//
2014-01-06 11:35:07.974431 40:b0:fa:6e:ed:87 fe80::42b0:faff:fe6e:ed87
2014-01-06 11:35:09.294228 40:b0:fa:6e:ed:87 2001:db8:cafe:face:42b0:faff:fe6e:ed87
2014-01-06 11:35:09.915232 40:b0:fa:6e:ed:87 2001:db8:cafe:face:5899:56a0:50ad:dec5
2014-01-06 11:41:23.989316 c0:63:94:76:75:dc 2001:db8:cafe:face:c09:863b:fe6a:3a9b
2014-01-06 11:41:24.059192 c0:63:94:76:75:dc fe80::8a7:8b0f:728:8c06
2014-01-06 11:41:24.711347 c0:63:94:76:75:dc 2001:db8:cafe:face:212c:bf2d:ebc6:3dfb
2014-01-06 11:45:05.045846 80:ea:96:13:1f:05 2001:db8:cafe:face:1435:f852:f26d:92fc
2014-01-06 11:45:05.382753 80:ea:96:13:1f:05 fe80::c64:dd83:1c2c:cb54
2014-01-06 11:45:05.386313 80:ea:96:13:1f:05 2001:db8:cafe:face:f5ad:3376:5329:a493
2014-01-06 11:52:31.769566 00:13:02:ac:11:fa fe80::8c9e:11cb:6d84:ce8f

This gives a kind of table which shows all MAC addresses on the network that requested an IPv6 address.

For a better structure, “sort” and “uniq” can be used to sort the table by MAC address and to suppress duplicate entries. That is, the following table shows, for each MAC address, all requested IPv6 addresses, sorted by MAC address and by time:

weberjoh@jw-nb09:~$ cat dad_2014-01-all.txt | sed s/\>.*has.// | sed s/,.*// | sort -k 3 | uniq -f 2
2014-01-07 09:57:01.595410 00:13:02:ab:f7:38 2001:db8:cafe:face:2833:2afa:cf2c:cf61
2014-01-07 09:57:01.595433 00:13:02:ab:f7:38 2001:db8:cafe:face:40c9:9a6b:90ae:6e37
2014-01-09 09:34:17.332011 00:13:02:ab:f7:38 2001:db8:cafe:face:a470:98b7:79a5:2d85
2014-01-07 09:56:53.077369 00:13:02:ab:f7:38 fe80::2833:2afa:cf2c:cf61
2014-01-07 13:21:15.087628 00:13:02:ac:11:fa 2001:db8:cafe:face:5cb7:52b8:82de:bce7
2014-01-06 11:52:32.269169 00:13:02:ac:11:fa 2001:db8:cafe:face:8c9e:11cb:6d84:ce8f
2014-01-06 11:52:32.269192 00:13:02:ac:11:fa 2001:db8:cafe:face:bdd6:2e3d:b3df:ed0d
2014-01-07 13:28:25.903966 00:13:02:ac:11:fa 2001:db8:cafe:face:d935:32a0:cca9:b9d1
2014-01-06 11:52:31.769566 00:13:02:ac:11:fa fe80::8c9e:11cb:6d84:ce8f
2014-01-07 16:18:28.117644 00:13:e8:ad:4e:1f 2001:db8:cafe:face:8128:c847:2717:8a20
2014-01-09 09:11:08.203774 00:13:e8:ad:4e:1f 2001:db8:cafe:face:821:5574:9b5d:daaf

 

Grep a MAC Address

Of course, a MAC address can also be grepped in the logfile to show its corresponding IPv6 addresses. Used in conjunction with the above-mentioned sed, sort, and uniq pipes, a sample output looks like this:

weberjoh@jw-nb09:~$ cat dad_2014-01-all.txt | grep 40:b0:fa:6e:ed:87 | sed s/\>.*has.// | sed s/,.*// | sort -k 3 | uniq -f 2
2014-01-06 11:35:09.294228 40:b0:fa:6e:ed:87 2001:db8:cafe:face:42b0:faff:fe6e:ed87
2014-01-06 11:35:09.915232 40:b0:fa:6e:ed:87 2001:db8:cafe:face:5899:56a0:50ad:dec5
2014-01-09 08:32:48.575847 40:b0:fa:6e:ed:87 2001:db8:cafe:face:903d:81ad:64f6:1199
2014-01-06 11:35:07.974431 40:b0:fa:6e:ed:87 fe80::42b0:faff:fe6e:ed87

 

Disadvantages & Alternatives

Using Tcpdump as a network sniffer in this way is more a proof of concept than a real solution for enterprises. It might fit a guest Wifi, but not productive environments.

Disadvantages of this Method

  • Only on a single local network: Since the DAD messages are sent within a layer 2 infrastructure (and not routed anywhere else), the Tcpdump sniffer receives only messages from the directly connected network. That is: It does not scale at all! Completely covering the whole network would require a sniffer in every subnet.
  • Not against “real” attackers: This method only fits normal users who might unintentionally perform some malicious action. It will not capture the MAC addresses of hackers, who could omit the DAD message or simply change their hardware MAC address.
  • If DAD fails for the IPv6 node: Consider the (really rare) situation in which an IPv6 node wants to register an IPv6 address that is already in use. This is no problem for the node, since it simply generates another one. However, in the DAD logs, the DAD message for this (already used) IPv6 address appears with the MAC address of the new IPv6 node. In a subsequent analysis of the DAD logs, this IPv6 address could misleadingly point to the MAC address that did not get this address.

Alternatives

  • Switch, Switch, and again: Switch: It should be clear that countermeasures against layer 2 attacks, and layer 2 monitoring in general, should be deployed in the layer 2 infrastructure as well. A passively sniffing host (such as the one used in this PoC) will not see everything on the network, but mostly “broadcast” messages. That is, the concept of first hop security in the switches is the answer for these situations. (Cisco paper here.) The switch is able to see every neighbor discovery message from all clients directly on its switchports. Technologies such as IPv6 guard in general, RA guard, DHCPv6 snooping, NDP monitor, etc. should be used to monitor and thwart these types of threats.
  • NDPMon: The free Neighbor Discovery Protocol Monitor is a complete monitoring package which can be used to find anomalies in various ICMPv6 messages such as router and neighbor advertisements. But since it runs on a host in the network (and not directly on a switch), its usefulness might be reduced, too. However, for obvious attacks and for monitoring MAC-IPv6 address bindings, it fits well. The following screenshot shows the neighbor table of NDPMon, which I tested in my lab. It gives a nice overview of the IPv6 addresses used by different clients, with timestamps and aggregated by MAC address:
    NDPMon Neighbors partial table
  • SLAACer: A daemon that receives IPv6 traffic from the mirroring (SPAN) ports of many switches and generates syslog/SQL messages based on Neighbor Advertisements. In the best case, it is configured to receive *any* NA messages and therefore has a *complete* view of the MAC-IPv6 address bindings on the network. Here is the project homepage.
  • ipv6mon: The SI6 Networks’ Address Monitoring Daemon listens for DAD messages, too, but also sends multicast probes to discover further nodes on the net.
  • Querying the Router’s Neighbor Cache: Another option is to query the router’s neighbor cache, since it stores all IPv6 neighbors that made connections through it. This requires certain resources on the router. The disadvantage is that the router doesn’t store IPv6 neighbors for connections that were established inside the local subnet.
  • DHCPv6 Server: Of course, it is not mandatory to use SLAAC at all. A stateful DHCPv6 server could be used to allocate IPv6 addresses, which is then in the position to store all requesting MAC addresses. However, this would not stop attackers, who could still use static IPv6 addresses.

Conclusion

With this post I showed the basic functionality of the duplicate address detection messages and how they can be stored to maintain a list of MAC-to-IPv6 address bindings on a layer 2 network. This solution fits small IPv6 networks (and may help build an understanding of IPv6), but is not meant to be used in security-relevant enterprise environments. Dedicated intrusion detection/prevention systems (IDS/IPS) and security information and event management (SIEM) systems should be used to monitor critical networks.

Links for Further Reading

Here are a few links for further reading. Since my references for this article are always inline, I am only listing further links here:

Palo Alto GlobalProtect for Linux with vpnc


This is a tutorial on how to configure the GlobalProtect Gateway on a Palo Alto firewall in order to connect to it from a Linux computer with vpnc.

Short version: Enable IPsec and X-Auth on the Gateway and define a Group Name and Group Password. With these two values (and the gateway address), add a new VPN profile within vpnc on the Linux machine. Log in with the already existing credentials.

Long version with screenshots comes here:

I assume that an already working GlobalProtect configuration is in place. The tested PAN-OS version was 6.0.1.

Configuration Palo Alto

The main step is the activation of IPsec (which is useful for the mere GlobalProtect client, too) and of X-Auth Support on the GlobalProtect Gateway. A group name and group password must be set, just like the VPN client settings on a Cisco ASA firewall.

GlobalProtect vpnc - Enable X-Auth

Furthermore, the “from untrust to untrust” security policy must be extended with at least the application “ciscovpn”. But due to the application dependency warnings after a successful commit on the PA, it is less annoying if “dtls” and all the other dependencies of ciscovpn are allowed, too, even though they are not needed. This way, the commit warnings can be reduced.

That is, I am permitting the following applications for the complete GlobalProtect process, incl. GlobalProtect client, etc.:

GlobalProtect vpnc - Security Policy

Linux: vpnc

My test machine ran Ubuntu 13.10 with Linux kernel 3.11.0-18.

The following two applications must be installed:

sudo apt-get install vpnc network-manager-vpnc

To add a VPN connection, click on the network symbol in the upper right corner: VPN Connections -> VPN configuration -> Add -> Cisco VPN Client (vpnc). Give it a name and fill in the gateway name/address, the username, and the group name & password of the just configured GlobalProtect Gateway (sorry for the German screenshot):

GlobalProtect vpnc - Linux VPN
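As an alternative to the NetworkManager GUI, the same profile can be written as a plain vpnc configuration file and started with “sudo vpnc globalprotect”. The following is only a sketch with placeholder values; gateway address, group name/password, and username must be replaced with the values configured on the GlobalProtect Gateway:

```
# /etc/vpnc/globalprotect.conf -- placeholder values
IPSec gateway vpn.example.com
IPSec ID GROUPNAME
IPSec secret GROUPPASSWORD
Xauth username USERNAME
```

Since no “Xauth password” line is present, vpnc prompts for the user password interactively.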

Test

To connect to the VPN endpoint, click on the new VPN profile and type in your account name and password. After a few seconds, the VPN tunnel should be established.

Here are two listings from my Linux test machine: the IP addresses (ip a s) and the routing table (ip r s). The first two outputs show the values before the VPN tunnel is established:
weberjoh@JW-NB01-Ubuntu:~$ ip a s
1: lo: <loopback,up,lower_up> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <no-carrier,broadcast,multicast,up> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 00:15:c5:16:6a:d2 brd ff:ff:ff:ff:ff:ff
3: wlan0: <broadcast,multicast,up,lower_up> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:13:02:47:49:37 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.36/24 brd 192.168.1.255 scope global wlan0
       valid_lft forever preferred_lft forever
    inet6 fe80::213:2ff:fe47:4937/64 scope link 
       valid_lft forever preferred_lft forever

weberjoh@JW-NB01-Ubuntu:~$ ip r s
default via 192.168.1.1 dev wlan0  proto static 
192.168.1.0/24 dev wlan0  proto kernel  scope link  src 192.168.1.36  metric 9

 

The following, in contrast, shows the values with the VPN tunnel established. A new tun0 interface is present, and the default route points to that tun0 interface:

weberjoh@JW-NB01-Ubuntu:~$ ip a s
1: lo: <loopback,up,lower_up> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <no-carrier,broadcast,multicast,up> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 00:15:c5:16:6a:d2 brd ff:ff:ff:ff:ff:ff
3: wlan0: <broadcast,multicast,up,lower_up> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:13:02:47:49:37 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.36/24 brd 192.168.1.255 scope global wlan0
       valid_lft forever preferred_lft forever
    inet6 fe80::213:2ff:fe47:4937/64 scope link 
       valid_lft forever preferred_lft forever
4: tun0: <pointopoint,multicast,noarp,up,lower_up> mtu 1412 qdisc pfifo_fast state UNKNOWN qlen 500
    link/none 
    inet 192.168.126.1/32 brd 192.168.126.1 scope global tun0
       valid_lft forever preferred_lft forever

weberjoh@JW-NB01-Ubuntu:~$ ip r s
default dev tun0  proto static 
80.154.108.228 via 192.168.1.1 dev wlan0  proto static 
192.168.1.0/24 dev wlan0  proto kernel  scope link  src 192.168.1.36  metric 9

 

And by the way: the DNS server in /etc/resolv.conf is NOT changed during the VPN connection. Only the search domain (DNS suffix) corresponding to the network settings in the GlobalProtect Gateway is appended.

Here are some screenshots of the Palo Alto firewall: The first one shows the Gateway Remote Users with a client of “Linux…”, while the second screenshot shows the System Log with detailed information about the GlobalProtect session: It is recognized as a Cisco VPN Client. After the finished session, the Traffic Log shows at least two sessions with “ciscovpn”, one on port 500 (IKE) and one on port 4500 (ESP inside UDP).

Remote Users System Log Traffic Log

And as always, I am using my http://ip.webernetz.net script to show my current Internet IP address which reveals in this case, that I am surfing through the Palo Alto ISP connection.

Links


Basic syslog-ng Installation


This post shows a guideline for a basic installation of the open source syslog-ng daemon in order to store syslog messages from various devices in a separate file for each device.

I am using such an installation for my routers, firewalls, etc., to have an archive of all their messages. Later on, I can grep through these logfiles and search for specific events. Of course it does not provide any built-in filter or correlation features – it is obviously not a SIEM. However, as a first step, I think it’s better than nothing. ;)

Prerequisites

This tutorial relies on a blank Linux (server) installation such as shown here. I am using an Ubuntu server. Furthermore, I assume that the reader knows which of his devices are capable of sending syslog messages. That is: I am only showing the syslog-ng installation and no further details on how to send syslog messages from the various devices to the server.

Installation

The first step is to install the syslog-ng package. I am using an Ubuntu server:

sudo apt-get install syslog-ng

The default configuration file is /etc/syslog-ng/syslog-ng.conf. On Ubuntu, it already contains a few lines that generate logfiles under /var/log/, e.g., the basic logfile “syslog”, which can be tailed with tail -f /var/log/syslog to see incoming messages from the system itself.

Configuration

I will now show the basic configuration of syslog-ng along with a template for devices in order to:

  1. have a separate folder for each device, with
  2. a new file every day, nested in folders for year and month.

For more detailed configuration commands, this wiki from Arch Linux gives many good examples.

Source

Since the last lines in the “syslog-ng.conf” file (/etc/syslog-ng/syslog-ng.conf) end with @include "/etc/syslog-ng/conf.d/", all configuration files in the folder “conf.d” are processed, too. Therefore, I created a new configuration file called “firewalls.conf” in that subfolder. It has the following lines in it:

(Note: Replace USERNAME and USERGROUP with the name and group of the account under which the logfiles should be written to disk.)

##################################################
options {
        create_dirs(yes);
        owner(USERNAME);
        group(USERGROUP);
        perm(0640);
        dir_owner(USERNAME);
        dir_group(USERGROUP);
        dir_perm(0750);
};


##################################################
source s_udp {
        udp(port(514));
};

This “source s_udp” object is quite general and simply listens on UDP port 514 for incoming syslog messages. It must appear only once in the config file.

Filter & Destination

Now it’s time for the template. The only things to change are the two UPPER CASE variables in the following lines:

#Template for a new firewall in the firewalls.conf file
#Entries to be changed: NAMEOFTHEFIREWALL and IPOFTHEFIREWALL

##################################################
filter f_NAMEOFTHEFIREWALL {
        host("IPOFTHEFIREWALL");
};
destination d_NAMEOFTHEFIREWALL {
        file("/var/log/firewalls/NAMEOFTHEFIREWALL/$YEAR/$MONTH/$YEAR-$MONTH-$DAY.NAMEOFTHEFIREWALL.log");
};
log {
        source(s_udp);
        filter(f_NAMEOFTHEFIREWALL);
        destination(d_NAMEOFTHEFIREWALL);
};

That is:

  • the filter “f_NAMEOFTHEFIREWALL” filters on the source IP address of the sending device,
  • the destination “d_NAMEOFTHEFIREWALL” is set to the hierarchical path,
  • and finally, the “log” statement takes all messages from the source and uses the filter to store them at the destination path.

These few lines in the template can appear many times in the config file. (Remember: the source s_udp must appear only once.) So you can copy & paste it for every syslog device.
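As a concrete instance: filled in for a firewall named fd-wv-fw02 with the (here assumed) management IP 192.168.120.2, the template would read as follows; I am using underscores in the object names:

```
##################################################
filter f_fd_wv_fw02 {
        host("192.168.120.2");
};
destination d_fd_wv_fw02 {
        file("/var/log/firewalls/fd-wv-fw02/$YEAR/$MONTH/$YEAR-$MONTH-$DAY.fd-wv-fw02.log");
};
log {
        source(s_udp);
        filter(f_fd_wv_fw02);
        destination(d_fd_wv_fw02);
};
```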

A restart of the syslog-ng daemon is required to have the just added configuration active:

sudo service syslog-ng restart

After that, netstat -l should show a line similar to the following one, revealing that port 514 is listening:
weberjoh@jw-nb10:/etc/syslog-ng/conf.d$ netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
udp        0      0 *:syslog                *:*

 

Now, after adding all my devices to the config file, they all log to the syslog-ng server. The paths are quite long but structured, e.g.:

/var/log/firewalls/fd-wv-fw02/2014/07/2014-07-21.fd-wv-fw02.log
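Thanks to this structure, searching across all devices and months becomes a simple recursive grep. A small sketch (search_logs is a made-up helper name; the keyword is just an example):

```shell
#!/bin/bash
# List all logfiles below a base folder that contain a keyword
# (case-insensitive), e.g. search_logs deny /var/log/firewalls
search_logs() {
  grep -r -i -l "$1" "$2" 2>/dev/null
}
```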

Examples

This is what syslog messages from my Palo Alto firewall look like when changing some policy rules:

Jul 21 16:14:01 192.168.120.2 1,2014/07/21 16:14:01,001234567891,CONFIG,0,0,2014/07/21 16:14:01,192.168.125.41,,edit,weberjoh,Web,Succeeded, vsys  vsys1 rulebase security rules  Ping Untrust Trust Deny,2429,0x0
Jul 21 16:14:58 192.168.120.2 1,2014/07/21 16:14:58,001234567891,CONFIG,0,0,2014/07/21 16:14:58,192.168.125.41,,edit,weberjoh,Web,Succeeded, vsys  vsys1 rulebase security rules  GlobalProtect an Untrust,2430,0x0
Jul 21 16:16:22 192.168.120.2 1,2014/07/21 16:16:22,001234567891,CONFIG,0,0,2014/07/21 16:16:22,192.168.125.41,,edit,weberjoh,Web,Succeeded, vsys  vsys1 rulebase security rules  SMTP ESA to Postfix,2431,0x0
Jul 21 16:18:06 192.168.120.2 1,2014/07/21 16:18:06,001234567891,CONFIG,0,0,2014/07/21 16:18:06,192.168.125.41,,edit,weberjoh,Web,Succeeded, vsys  vsys1 log-settings profiles  lfp-standard,2432,0x0
Jul 21 16:19:35 192.168.120.2 1,2014/07/21 16:19:35,001234567891,SYSTEM,general,0,2014/07/21 16:19:35,,general,,0,0,general,informational,"Commit job started, user=weberjoh, command=commit, client type=2, .",240518390879,0x0
Jul 21 16:19:35 192.168.120.2 1,2014/07/21 16:19:35,001234567891,CONFIG,0,0,2014/07/21 16:19:35,192.168.125.41,,commit,weberjoh,Web,Submitted,,2433,0x0

 

And here are some Juniper ScreenOS logs during active Internet connections:

Jul 21 16:37:45 172.16.1.1 fd-wv-fw01: NetScreen device_id=fd-wv-fw01  [Root]system-notification-00257(traffic): start_time="2014-07-21 16:37:42" duration=3 policy_id=2 service=http proto=6 src zone=DMZ dst zone=Untrust action=Permit sent=304 rcvd=148 src=192.168.110.17 dst=62.138.108.130 src_port=53709 dst_port=80 src-xlated ip=192.168.110.17 port=53709 dst-xlated ip=62.138.108.130 port=80 session_id=6290 reason=Close - TCP FIN
Jul 21 16:37:45 172.16.1.1 fd-wv-fw01: NetScreen device_id=fd-wv-fw01  [Root]system-notification-00257(traffic): start_time="2014-07-21 16:37:12" duration=33 policy_id=1 service=https proto=6 src zone=Trust dst zone=Untrust action=Permit sent=5699 rcvd=7283 src=192.168.125.41 dst=193.24.224.29 src_port=10221 dst_port=443 src-xlated ip=192.168.125.41 port=10221 dst-xlated ip=193.24.224.29 port=443 session_id=6023 reason=Close - TCP RST
Jul 21 16:37:45 172.16.1.1 fd-wv-fw01: NetScreen device_id=fd-wv-fw01  [Root]system-notification-00257(traffic): start_time="2014-07-21 16:37:12" duration=33 policy_id=1 service=https proto=6 src zone=Trust dst zone=Untrust action=Permit sent=4947 rcvd=6531 src=192.168.125.41 dst=193.24.224.29 src_port=10219 dst_port=443 src-xlated ip=192.168.125.41 port=10219 dst-xlated ip=193.24.224.29 port=443 session_id=7902 reason=Close - TCP RST
Jul 21 16:37:45 172.16.1.1 fd-wv-fw01: NetScreen device_id=fd-wv-fw01  [Root]system-notification-00257(traffic): start_time="2014-07-21 16:37:12" duration=33 policy_id=1 service=https proto=6 src zone=Trust dst zone=Untrust action=Permit sent=4707 rcvd=6297 src=192.168.125.41 dst=193.24.224.29 src_port=10220 dst_port=443 src-xlated ip=192.168.125.41 port=10220 dst-xlated ip=193.24.224.29 port=443 session_id=7667 reason=Close - TCP RST

 

That’s it. ;)

Reading a Power Meter with S0 Interface from a Raspberry Pi


Finally: I am now reading the power consumption of our apartment with a Raspberry Pi and having my monitoring server (MRTG + Routers2) draw nice graphs. For this, I use a power meter with an S0 interface, which I installed directly in the distribution board. An interrupt routine on the Pi evaluates the impulses of this “smart meter”. The monitoring server in turn queries the Pi via SNMP. Many small steps, which I want to explain in detail in this blog post. Have fun!

The Idea

The power data takes the following “path” from the meter to the monitoring system:

  1. The power meter (3 phases) generates 800 impulses per kWh and outputs them on an S0 interface, i.e., an optocoupler that closes the circuit on each impulse.
  2. A two-wire line then leads to the Raspberry Pi, which simply increments a counter in a textfile on each impulse. (The interrupt triggers a small program that writes the file.)
  3. Finally, the already existing monitoring server queries the Pi via SNMP, i.e., an additional script is started which simply outputs the content of the textfile and transports it in an OID via SNMP.

The Hardware

Here are a few pictures of the hardware. I bought a three-phase meter from Eltako, exact type: DSZ12E-3x80A, for about 90 €. This power meter is installed AFTER the calibrated and official power meter of the public utility in the basement. Hence, there are no problems with the electricity bill. A standard two-wire telephone cable (top right in the picture, yellow and white, coming out of the distribution board) then connects the S0 interface to the Raspberry Pi.

Distribution board before Power meter roughly connected Cover back on Close-up of the power meter Pi from the side Pi under the worktop

The Software

An initial Google search led me to a project from volkszaehler with the article “S0-Impuls Zähler direkt über RS232 auswerten”. I followed the instructions in the green box, which are meant specifically for the Raspberry Pi: switch the GPIO port to alternative function 3 and then run the stty and strace commands. Unfortunately, that did not work. And since I was not in the mood to keep testing forever, I simply googled on.

S0 Evaluation on the Pi

Fortunately, I then came across the article “S0-Stromzähler am RaspberryPi”, which describes exactly what I intended to do. And it even worked quite easily. Yay! So:

  1. Connect S0+ and S0- to the correct pins (not that easy, since the naming of the GPIO ports varies and differs from the numbering of the pins on the Pi, and the numbering of the ports for Wiring Pi is different yet again). In my case it is as follows: power meter pin 20 = S0+ (white wire) = GPIO 3 -> pin 5, and: power meter pin 21 = S0- (yellow wire) = GND -> pin 6. Phew.
  2. Clone Wiring Pi via git.
  3. Compile the ISR.c file with the correct #define (in my case it is
    #define BUTTON_PIN 9
    ). Attention: Do not use the isr.c file in the “examples” subfolder of Wiring Pi, but the one behind the link.

However, to write the counter value correctly into a file, the C file has to be adapted a bit (link: writing files). Declare another variable in the main section:

FILE *datei;
and then, instead of the printf output, add the following, so that the current counter value is written to the file:
//    printf (" Done. counter: %5d\n", globalCounter) ;
    datei = fopen ("/var/strom/stromcounter", "w");
    fprintf (datei, "%d\n", globalCounter);
    fclose (datei);

I then created a folder for the logfile:

sudo mkdir /var/strom
, compiled the program:
gcc -lwiringPi -o stromzaehler stromzaehler.c
 and moved it:
sudo mv stromzaehler /usr/local/bin/
. With
sudo stromzaehler
 it then ran. :)

Now the program still has to be started automatically when the Pi boots (link: services). For this, I followed this howto. I created the following file as “/etc/init.d/stromzaehler” (

sudo nano /etc/init.d/stromzaehler
 ):
#! /bin/sh
### BEGIN INIT INFO
# Provides:          Start stromzaehler: evaluate the S0 interface via ISR and write a textfile
# Required-Start:
# Required-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Short description
# Description:       Longer description
### END INIT INFO
# Author: Johannes Weber (johannes@webernetz.net)

# Actions
case "$1" in
    start)
        /usr/local/bin/stromzaehler &
#        /opt/example start
        ;;
    stop)
        killall stromzaehler
#        /opt/example stop
        ;;
    restart)
        killall stromzaehler
        /usr/local/bin/stromzaehler &
#        /opt/example restart
        ;;
esac

exit 0

And then (as described in the link) first set the correct permissions on the file so that it is executable:

sudo chmod 755 /etc/init.d/stromzaehler
, and add the runlevels for automatic execution at boot:
sudo update-rc.d stromzaehler defaults
. (Among other things, the folder /etc/rc2.d now contains a symbolic link named "S01stromzaehler" pointing to the stromzaehler file just created in the init.d folder.)

With the following commands the service can now be stopped, started, or restarted (not that this is really necessary, since it starts automatically at boot and is meant to run all the time) [I also just noticed that "restart" does not actually work: the daemon is killed but not started again. Hm. Never mind, I only want it to run automatically at boot]:

sudo service stromzaehler stop
sudo service stromzaehler start
sudo service stromzaehler restart

The current meter reading (reminder: 800 pulses/kWh) can be read like this:

cat /var/strom/stromcounter

IT WORKS! Yay. ;)
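As a side note, the meter's 800 pulses/kWh make it easy to turn two readings of the counter file into an average power figure. A minimal sketch (the helper name power_watts and the sampling window are my own choices, not part of the original setup):

```shell
#!/bin/bash
# Average power in watts from two readings of /var/strom/stromcounter.
# 800 pulses/kWh -> one pulse = 1.25 Wh, so:
# watts = delta_pulses * 1.25 Wh * 3600 s/h / delta_seconds = delta * 4500 / seconds
power_watts() {
  local pulses_start=$1 pulses_end=$2 seconds=$3
  echo $(( (pulses_end - pulses_start) * 4500 / seconds ))
}

# Example: 80 pulses within 60 seconds = 0.1 kWh per minute -> 6000 W
power_watts 0 80 60
```

Reading /var/strom/stromcounter, sleeping for a minute, and reading it again would give the arguments for a live measurement.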

(For those interested in more detail: I find the difference between the two programs I linked interesting. While the first one applies a voltage to the S0 interface and then measures on the other wire when the voltage arrives, the second project uses an interrupt with a pull-down. That is: a voltage is constantly applied, and the S0 interface pulls it down to 0. So while in the first case the second pin senses a voltage, in the second case the Pi registers the voltage drop on the first pin.)

Reading the Value via SNMP

To access the contents of the text file via SNMP, only an extend script has to be added to snmpd.conf. I did exactly the same for my temperature sensor, so please look there for further details. Here is just the short version:

Open the configuration file on the Pi:

sudo nano /etc/snmp/snmpd.conf
and add the following line below "EXTENDING THE AGENT":
extend-sh stromcounter cat /var/strom/stromcounter

Then restart the snmpd service:

sudo service snmpd restart
. The OID under which the value appears can be found e.g. with snmpwalk or with the iReasoning MIB Browser that I have recommended many times.

Integration into MRTG

Here are the configuration lines for MRTG and Routers2.cgi. As the colour I chose a deep black. I had not used it anywhere else yet, and somehow it also fits the generation of electricity from coal. :D

routers.cgi*Icon: house-sm.gif
routers.cgi*ShortDesc: Power Consumption

#OID found via snmpwalk
Target[strom-fdorf]: 1.3.6.1.4.1.8072.1.3.2.4.1.2.12.115.116.114.111.109.99.111.117.110.116.101.114.1&PseudoZero:d83lykUUdiqhdz@192.168.86.5:::::2
#800 pulses per kWh -> MaxBytes (per second) set to 5 = 3600*5/800, corresponding to 22.5 kW (that should be enough ;))
MaxBytes[strom-fdorf]: 5
Title[strom-fdorf]: Power Consumption Friedrichsdorf
#Stored raw, i.e. 800 pulses per kWh. Displayed in Wh, hence times 1.25. One factorised pulse then corresponds to exactly 1 Wh.
Factor[strom-fdorf]: 1.25
#But always output as power per hour (rather than per second)
Options[strom-fdorf]: perhour
Colours[strom-fdorf]: BLACK#000000, YELLOW#FFD600, RED#FF0000, ORANGE#FC7C01
YLegend[strom-fdorf]: Watt
Legend1[strom-fdorf]: Power
Legend3[strom-fdorf]: Peak power
LegendI[strom-fdorf]: Power:
ShortLegend[strom-fdorf]: W
routers.cgi*Options[strom-fdorf]: nomax noo
routers.cgi*TotalLegend[strom-fdorf]: Wh
routers.cgi*ShortDesc[strom-fdorf]: Fdorf
#Comment below each graph, because I cannot remember this otherwise:
routers.cgi*Comment[strom-fdorf]: 1x peak washing machine, 2x peak dishwasher

Note that MRTG only fetches the counter value from the Pi via SNMP every 5 minutes. That is, only the average over the last 5 minutes is ever stored. A serious peak in power consumption of, say, 5 kW over one minute would only show up as 1 kW over 5 minutes. But well, I can live with that.
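The arithmetic in the configuration comments above can be double-checked quickly; this little calculation reproduces the 22.5 kW maximum and the 1.25 Wh-per-pulse factor:

```shell
#!/bin/bash
# 800 pulses/kWh: a MaxBytes of 5 pulses per second corresponds to
# 5 * 3600 / 800 = 22.5 kW, and one pulse is 1000/800 = 1.25 Wh
# (the Factor used in the MRTG config).
awk 'BEGIN {
  printf "max kW: %.1f\n", 5 * 3600 / 800
  printf "Wh per pulse: %.2f\n", 1000 / 800
}'
```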

The Result

Here are three example graphs from my monitoring server. The first two in the "daily" view with some notes about the appliances involved. The last one in the "monthly" view, with the red line for the peaks and the black block still showing the average. You can nicely see in which week we were on holiday and on which days neither the dishwasher nor the washing machine ran, since no peak exceeds 1 kW. (Unfortunately, you can also see on which days I accidentally did not have the Pi running. Bad luck.)

[Graphs: power consumption Friedrichsdorf — daily view (2x) and monthly view]

The Future

Well, to be consistent, the next step would be to replace the water meters with ones that have an S0 interface as well. Unfortunately, that is anything but easy. :( Either you would have to have a calibrated/official meter installed (which would be expensive), or you would have to install an additional water meter with S0 (which I cannot do myself). Besides, we have four water meters in total in our rented flat (2x cold, 2x hot, in the kitchen and the bathroom each). So that will probably not happen. Or only once we own a home. But that may take a while. And then, please, also with temperature sensors on the flow and return pipes, and so on. :)

Creating Low-Budget Time-Lapses in Full HD


Besides normal photography and filming, I find two kinds of videos very interesting: slow-motion clips, in which a fast action is shown very slowly, and time-lapses, in which a slow action is shown very quickly. While slow motion unfortunately requires expensive hardware that delivers a multiple of the frames-per-second (fps) rate of normal cameras, time-lapses are fairly simple to create yourself by photographing a scene long enough and then combining those photos into a video.

That is exactly what I have been doing for a few years with an old Canon digital camera and some free software. In this post I explain in detail how I create such low-budget time-lapses in Full HD and what to watch out for. Have fun. :)

The End Product

Before I get going, here is a proof-of-concept video of mine. A time-lapse over 3 months, shot from our balcony:

Roughly speaking, creating one involves the following three steps:

  1. Shooting the photos: automatically take and store photos over a long period of time
  2. Editing the photos: crop the raw images (to Full HD resolution) and edit them (e.g. overlay the time of day)
  3. Creating the time-lapse: render and save the time-lapse clip as a video file

Preface

From today's perspective (I am writing this post in early 2015) it has to be said that creating time-lapses is no longer anything special. :) I am well aware that the kids today can produce interesting video clips far more easily with their iPhones and GoPros, including slow motion and time-lapse.

I created my first time-lapse back in 2006! At that point, not even the first iPhone existed. Digital cameras had about 2-3 megapixels and the first TVs with Full HD had only just come out. Filming in Full HD was out of the question! Back then it was therefore very cool to create such Full HD time-lapses as I present here for under €100. :)

So, here we go:

Required Hardware

  • Canon digital camera: Back then I only looked at the small Canons because it seemed to me that the most software was available for them. But of course other cameras work too. I own a PowerShot A520, which I bought on eBay for about €30. (I see they now go for under €10 on eBay.) Since the Full HD resolution is "only" 1920*1080 pixels (merely 2 megapixels), a camera with about 3 megapixels is perfectly sufficient for this purpose. It is important, however, that the camera has a power connector so that it can run for a very long time. There should also be software to control the camera remotely. Such software exists for many Canons/Nikons. (Check beforehand!)
  • Notebook: To trigger the camera remotely, an old notebook with a USB port is ideal. I found an old Dell notebook in the basement with Windows XP, a 1 GHz CPU, and 256 MB RAM. Completely outdated by today's standards, but entirely sufficient for this purpose.
  • Power supply for the camera: For example, a universal power supply (much cheaper than the manufacturer's original) with adjustable voltage. Caution: it must deliver enough power. (Mine needs a hefty 2 A at 3 V!) Definitely check the camera's manual for what the power supply must deliver.
  • Tripod for the camera: Besides a regular tripod, I use a small clamp mount (e.g. from Hama or InLine) that I can easily attach to window frames, balcony railings, and the like.
  • Odds and ends: Among other things, a power extension cord and a multi-socket. Also gaffer tape to fix the cables to the railing or wherever, if the camera is going to stand there for a while.

Here are a few photos of a typical setup of my gear (2x on the roof for a sunset time-lapse, 4x during the shooting of the video shown above; all still with a different camera, the PowerShot A400, which I owned before):

[Photos: notebook plus Canon camera on a tripod on the roof; view from below ;); tripod taped to a heavy object so it barely moves; notebook sheltered slightly under a table; the camera's view (little wide angle, so only part of the trees appears in the video); first snow — good cooling for the hardware ;)]

Step 0: Think It Through

Before actually creating a time-lapse, you have to think about the desired length of the video. In my experience, time-lapse videos of more than 3 minutes get boring rather quickly as long as only the same view is shown. It is therefore important to calculate in advance how many photos to take, so that the resulting length of the time-lapse does not get out of hand.

A video always has a frame rate of 25 frames/second. If, for example, you want to capture an action that actually lasts 12 h, a shooting interval of 10 seconds is a good choice: over that period you produce about 4300 pictures, which at 25 fps corresponds to a video length of almost 3 minutes.

For an action of, say, 7 days, an interval of about 120 seconds is enough, i.e. one photo every 2 minutes. That again yields roughly 5000 pictures in total, which would give a time-lapse of about 3:20 minutes.

For the formula friends among us, it looks like this (all times in seconds):

Interval = \frac{ActionDuration}{DesiredClipLength \times 25}
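The same formula as a small helper script (the function name is mine), reproducing the two examples above:

```shell
#!/bin/bash
# interval (s) = action duration (s) / (desired clip length (s) * 25 fps)
interval_seconds() {
  local duration=$1 clip_length=$2
  awk -v d="$duration" -v c="$clip_length" 'BEGIN { printf "%.1f\n", d / (c * 25) }'
}

interval_seconds $((12 * 3600)) 180   # 12 h action, 3 min clip -> 9.6 s
interval_seconds $((7 * 86400)) 200   # 7 days, 3:20 min clip -> 121.0 s
```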

Step 1: Shooting the Photos

I currently use a Canon PowerShot A520 digital camera and an old Windows XP notebook running the free software Cam4you remote. It has not been developed further for years, but it has always served me well. (To be fair, it crashed now and then, though that was more due to a fault of the camera than of the software.) Of course, any other remote-trigger software will do.

For the resolution I always use the Canon's full 4 megapixels, which in 4:3 format means 2272*1704. In other words: for the Full HD video (1920*1080) you only have to cut off a little of the width and a fair bit of the height, but you keep the native resolution of the images! This is very important, because downscaling afterwards would mean a noticeable loss of quality!

It is also important that the camera's clock is set correctly. In the second step I overlay the capture date of the images to give a better sense of the time span shown. This relies on the Exif data of the images, which is written when the shutter fires.

The following screenshots document the use of Cam4you remote. For details, please see the image captions:

[Screenshots: start camera control; settings for sequential naming of the photos (without spaces in the name); camera settings such as photo quality or exposure compensation; in interval mode the automatic triggering is activated — below 10 seconds the software cannot keep up with downloading the picture, so I always choose 11 seconds; optionally the photos can be uploaded via FTP (I used this once when I feared the notebook's hard disk was failing); the Cam4you remote settings can be saved in a preset; for a clean shutdown: stop camera control]

Once the session has been photographed, the respective folders contain a correspondingly large number of sequentially numbered photos. With that, the first and most important step is done.

Step 2: Editing the Photos

In the second step the photos are mainly cropped, namely to exactly 1920*1080 pixels for the Full HD video. In addition, the time of day plus some free text is overlaid, which I find quite handy because it shows the actual time of the capture.

For batch processing that many photos I use IrfanView, which is also available free of charge. You always have to experiment a bit with the settings: the crop of the photos obviously has to look good (after all, quite a bit of the height is cut away), and the position of the text can be varied as well. Once the settings are tested, IrfanView can be used as follows (see the image captions):

[Screenshots: open the batch-processing menu; select "Batch conversion + rename"; on the right, locate the image folder and "Add all"; finally choose the target directory for the edited photos; additionally, JPG can be selected as the output format with quality set to "100" in the options; set the advanced options: crop to Full HD with an X & Y offset (which should have been determined by testing beforehand), then add the text (click on options); choose position and size of the text (again, lots of trial and error) as well as the text itself — the first line in the example overlays the date & time from the Exif data, below it a free-text line; finally choose the renaming pattern (I always use #####, which is simply a sequential counter); then start! ;) Depending on the number of photos, it takes quite a while until all images have been processed]

Here is the line mentioned above for overlaying the date & time (Exif tag 36867 is the DateTimeOriginal field):

$E36867(%Y-%m-%d %H:%M)

The result of this step is a folder with umpteen images, all in 16:9 format at exactly 1920*1080 pixels. Depending on the settings, with a few lines of text and the time of the capture date.

Step 3: Creating the Time-Lapse

Now comes the most interesting step, in which a video file is produced from the photos. I use three tools for this, all of which are also open source or free of charge:

  • VirtualDub: A program for creating video files. It serves as the basis for joining the images into a video. Simply download, unpack, and run it (VirtualDub.exe). I use the 32-bit version, especially since only an official 32-bit version of AviSynth (see next) exists.
  • AviSynth: AviSynth makes it possible to present VirtualDub with a pseudo video format that actually consists of a series of images. That is, you specify which images are to be used as frames for a video, including their exact number. AviSynth is installed once. I use version 2.5.8 in the 32-bit variant. It dates from 2010 but serves its purpose perfectly well.
  • Xvid: The (free) codec I use to encode the video file. Xvid is also downloaded and installed once.

Now proceed as follows. Create a text file that contains the folder of the images, their number, and the frame rate of the video; you can simply save it as "Zeitraffer.avs". It looks like this, with the first two lines being just comments as a reminder:

#Read files "00001.jpg" through "03606.jpg" with 25 fps
#ImageSource("%05d.jpg", 00001, 03606, 25)

a = ImageSource("\2015-01-09\%05d.jpg", 00001, 00243, 25)

return(a)

In this example I read in the images from "00001" to "00243" at an fps rate of "25". If you have several folders, you can create further variables (b, c, whatever) and reference them in the return statement: "return(a+b+whatever)".
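If you do not want to count the images by hand, the .avs file can also be generated from a folder listing. A sketch (the function name and file layout are assumptions; it expects gapless files named 00001.jpg, 00002.jpg, …):

```shell
#!/bin/bash
# Generate an AviSynth script for all numbered JPGs in a folder.
# Expects gapless files named 00001.jpg, 00002.jpg, ...
make_avs() {
  local dir=$1 out=$2 fps=${3:-25}
  local count
  count=$(( $(ls "$dir"/[0-9][0-9][0-9][0-9][0-9].jpg 2>/dev/null | wc -l) ))
  printf 'a = ImageSource("%s\\%%05d.jpg", 00001, %05d, %d)\nreturn(a)\n' \
    "$dir" "$count" "$fps" > "$out"
}
```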

Now start VirtualDub, open this avs file, choose the compression/codec, and save the video. I explain these steps with screenshots again, though I explicitly point out that many details of the codec settings (Xvid) are not explained here! You have to read up on them elsewhere. (Such as the variant with 1st and 2nd pass.)

[Screenshots: in VirtualDub, the AviSynth text file is opened as a "video file" — simply select it; set the compression under Video -> Compression, e.g. the "Xvid HD 1080" profile; IMPORTANT: since time-lapse videos are very rich in detail, I always crank the quality up VERY high (given how short the videos are, this hardly matters); OPTIONALLY, the frame rate of the video can be changed to make the time-lapse "faster", e.g. doubling the speed by effectively evaluating only every second image — only speed-ups should be done, otherwise it stutters unnecessarily, and I recommend an integer multiple of the frame rate, such as factor 2 here; to save the video, select "Save as AVI"; after specifying the location, it starts; various windows show a few details that can simply be closed when finished. What pleases me most is that for once all CPU cores are loaded evenly (!). Nicely parallelised. ;)]

[Sound: If you have suitable background music, it can be added quite easily in VirtualDub via "Audio -> Audio from other file". Anyone familiar with video editing will edit the finished video in a proper editing program anyway and add music there. That, however, would no longer be free of charge, which was precisely my point here.]

!!!DONE!!! Congratulations! A finished video file has now been produced that can simply be opened and watched with the usual programs such as VLC, or of course published somewhere on the Internet.

Example of a Time-Lapse

As an example, I already linked one of my time-lapse videos above. It shows the area in front of our balcony from October to December, i.e. from "all leaves still green on the trees" to "it is snowing". The complete setup (camera + notebook) stood on the balcony the whole time. Since I had little to lose cost-wise, I did not mind. And indeed, the notebook gave up the ghost after the first temperatures below 0 °C.

A note on quality: Unfortunately the video looks quite bad on YouTube despite "1080p HD". This is presumably due to YouTube's compression, which does not preserve as much detail as the fine tree structures in the video would require. As a test I also uploaded the video to Vimeo (link), but it behaves similarly there. I had chosen a compression quality many times higher than required for "normal" videos. (My video is almost 200 MB for a length of 1:08 minutes.) When I watch the video on a Full HD TV at home, it really is pin sharp, similar to the original photos.

So far I have time-lapsed the following: weddings of friends (goes down very well when you can watch the whole evening programme in 3 minutes), sunrises & sunsets, a pedestrian zone (anywhere many people move about is interesting), LAN parties (yeah, back when we were young…), renovation work, construction cranes, etc. My longest time-lapse spanned almost a year and documented the construction of an office building.

Improvements

I will not conceal the fact that a time-lapse from a fixed position can also be boring. So here are some ideas for making such time-lapses look much more lively, which on the other hand also go well beyond "low budget":

  • DSLR: No question, a better camera also raises the quality of the time-lapse. And for a few short clips I would shoot with my DSLR, too. But for time-lapses running over days, you have to ask yourself whether you want to put that strain on a DSLR.
  • Slider: With a slider (or, in large: a dolly), i.e. a rig on which the camera is slowly moved or rotated in one direction while shooting, time-lapses look much more lively. Unfortunately, such a device is hard to just build yourself, since the camera has to run on a rail without any jerking. (Here is a test of such a slider. You can buy one here or here.)
  • Virtual camera pans: A much simpler camera movement can be achieved by constantly shifting the time-lapse crop within the original (much larger) photo, e.g. panning from top left to bottom right. In theory this is easy to do in software, as long as the software supports it (can IrfanView be scripted?). Alternatively, you could render the time-lapse at a higher resolution and pan across the frame with a video editing program. I have not looked into it yet, though. Here is a simple example of this.
  • Shooting without a notebook: For shorter sessions, shooting with a notebook and mains power is obviously nonsense. It is much easier with a large memory card and a remote trigger that can be programmed for timed photos. For all kinds of Canon cameras (like my small PowerShot) you can use the Canon Hack Development Kit (CHDK), which can trigger the shutter via script. That is, a simple digital camera can take photos continuously without any further accessories.
  • Scenes: My time-lapses are always a single action from a single viewpoint. Much nicer, of course, are time-lapses in which each scene lasts only a few seconds and numerous sequences were shot.
  • Matching background music: Just putting some music underneath is hardly an art. Synchronising the scene changes with the music, that would be great. But it would mean using a proper video editing program to cut the individual scenes in detail.
  • 3D: Since 3D photography is another hobby of mine, the idea of creating time-lapses in 3D suggests itself. You would have to trigger two cameras exactly in parallel, which should be possible with the CHDK just mentioned or the StereoData Maker. Regarding the editing of the photos and the creation of the time-lapse, however, quite some thought would still be required…
  • 4K/UHD: Full HD is already old news again. But the same workflow should also work for larger time-lapses. Bring on 4K. What am I saying? I mean at least 8K UHD, of course! :) Only the codec would have to change, presumably to H.265?
  • Professional software: If you want more, you can get more. For example with LRTimelapse, a professional tool for Adobe Lightroom. If you want to equalise the exposure of your photos or do virtual camera pans, this is the place to go. It costs money, though, and also requires Lightroom (which I do not currently use).

The ultimate goal for a time-lapse is definitely something like this (even though it will take me a while to get there…):

That Was Long

Hand on heart: who read all the way to here? Really? Wow! Thank you very much. I hope it was fun and inspired you to try it yourself.

I would be very happy about comments from people who have adopted and perhaps improved my workflow. Bring on the criticism and suggestions! 😉

And as a small addendum, just for fun, I created an animated GIF from four images of the photo session above:

[Animated GIF: autumn is coming]

Ciao.

Yet another ownCloud Installation Guide


If you want to run your own ownCloud installation, you can find plenty of documentation on the Internet on how to set up the server, e.g. the official ownCloud documentation, or installation guides such as this or that or here. But none of these pages alone provided enough information for installing a secure server completely from scratch.

So here comes my step-by-step guide, which surely won't be complete either. 😉 However, hopefully it will help other people searching for their way to install ownCloud. Additionally, I am showing how to upgrade an ownCloud server.

I am assuming that there is a fresh Ubuntu server installation in place (with a few other programs such as shown here), which already has static IP addresses and is accessible from the Internet. I am also assuming that there is a correct DNS name configured and that the SSL certificate for this DNS name is present.

(And note: Though I am trying to be really accurate about all commands, I am not showing every single keystroke. If you have any problems with any step: 1) Google is your friend or 2) write a comment below this post.)

I am using the following components in this guide:

  • Ubuntu Server 14.04.2 LTS
  • ownCloud 8.0.4 (later updated to 8.1.0)

Basic Installation

The first step is to install all of the necessary components on the Ubuntu server. This can be done by adding the repository with the following steps. In my case, 64 packages were installed. (Note that I am additionally installing the php5-mysql package. I do not fully know why, but several other guides did so. ;)) During the process, the user must type in the SQL root password. Choose a strong one and keep it in mind!

sudo sh -c "echo 'deb http://download.opensuse.org/repositories/isv:/ownCloud:/community/xUbuntu_14.04/ /' >> /etc/apt/sources.list.d/owncloud.list"
wget http://download.opensuse.org/repositories/isv:ownCloud:community/xUbuntu_14.04/Release.key
sudo apt-key add - < Release.key  
sudo apt-get update
sudo apt-get install owncloud php5-mysql

The apache server throws the following error: “Could not reliably determine the server’s fully qualified domain name, using 127.0.1.1. Set the ‘ServerName’ directive globally to suppress this message”. This can be corrected by editing the apache configuration:

sudo nano /etc/apache2/apache2.conf

in which the server name must be added on a new line:

ServerName NAME-OF-THE-SERVER


SQL

The SQL database must be configured in the following way. Choose an own password for the ownCloud database user:

mysql -u root -p
CREATE USER 'ownclouduser'@'localhost' IDENTIFIED BY 'PASSWORD';
CREATE DATABASE ownclouddb;
GRANT ALL ON ownclouddb.* TO 'ownclouduser'@'localhost';
FLUSH PRIVILEGES;
exit


Virtual Host and HTTPS

The following steps enable SSL and create the appropriate virtual hosts for ownCloud.

At first, enable SSL and the headers module (later on used for HSTS):

sudo a2enmod ssl
sudo a2enmod headers

Then, add the virtual host (such as shown here with a static redirect to https). Note that I assume that there is a trusted SSL certificate already in place inside the /etc/ssl/certs/… folders. So, create a new configuration file for apache:

sudo nano /etc/apache2/sites-available/owncloud.conf

and add the following blocks in which “SUBDOMAIN.DOMAIN.TLD” must be set to your ownCloud DNS name:

<VirtualHost *:80>
    ServerName SUBDOMAIN.DOMAIN.TLD
    Redirect permanent / https://SUBDOMAIN.DOMAIN.TLD/
</VirtualHost>

<VirtualHost *:443>
    ServerName SUBDOMAIN.DOMAIN.TLD
    ServerAdmin webmaster@DOMAIN.TLD
    DocumentRoot "/var/www/owncloud"
    <Directory /var/www/owncloud>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Order allow,deny
        Allow from all
        # add any possibly required additional directives here
        # e.g. the Satisfy directive (see below for details):
        Satisfy Any
    </Directory>
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/cloud.crt
    SSLCertificateKeyFile /etc/ssl/private/cloud.key
    SSLCertificateChainFile /etc/ssl/certs/StartSSLconcatenated.crt
	Header always add Strict-Transport-Security "max-age=15768000; includeSubDomains; preload"
    ErrorLog /var/log/apache2/SUBDOMAIN.DOMAIN.TLD-error_log
    CustomLog /var/log/apache2/SUBDOMAIN.DOMAIN.TLD-access_log common
</VirtualHost>

Finally, enable the new virtual host and reload the apache config:

sudo a2ensite owncloud.conf
sudo service apache2 reload


Additionally, change the SSL cipher suite in order to use only secure protocols (e.g., graded with an A or A+ by SSL Labs). Open the ssl.conf file:

sudo nano /etc/apache2/mods-available/ssl.conf

and change the following two lines (according to here):

SSLCipherSuite HIGH:!kRSA:!kDHr:!kDHd:!kSRP:!aNULL:!3DES:!MD5
SSLProtocol all -SSLv3

and restart the server:

sudo service apache2 restart


Final Steps

Now, point your browser to the ownCloud installation:

https://SUBDOMAIN.DOMAIN.TLD

and finalize the installation. This means that at least the MySQL login (configured a few steps before) is needed in the appropriate fields:

ownclouduser
PASSWORD
ownclouddb
localhost


After these steps, the trusted domains must be set (if not set correctly already). Open the config.php:

sudo nano /var/www/owncloud/config/config.php

and verify the “trusted_domains” section:

array (
    0 => 'IP-ADDRESS-OF-THE-SERVER',
    1 => 'SUBDOMAIN.DOMAIN.TLD',
  ),


And the cron job for ownCloud should be used (see here). Create a new crontab with the www-data user:

crontab -u www-data -e

which has the following job:

*/15  *  *  *  * php -f /var/www/owncloud/cron.php

And in the admin section of the GUI webpage, set the Cron button to “Cron” (instead of Webcron or AJAX).

Filesize

Optionally, change the maximum file size of your installation. "In order for the maximum upload size to be configurable, the .htaccess in the ownCloud folder needs to be made writable by the server", as documented here. So, change the ownership of the htaccess file:

sudo chown www-data:www-data /var/www/owncloud/.htaccess

and set the “maximum upload size” in the admin GUI, e.g., to 512M or greater (16G or more, if needed). Even though that should already suffice, open the .htaccess file and verify that the following three lines are present (I added the third line manually):

sudo nano /var/www/owncloud/.htaccess
php_value upload_max_filesize 4G
php_value post_max_size 4G
php_value memory_limit 4G

(I am not quite sure if a restart of apache is necessary here. However, I did it:)

sudo service apache2 restart

 

Update

I am always a bit afraid when updating web services via scripts or the like. But it is a must. So here we go. I updated my ownCloud installation from version 8.0.4 to 8.1.0. This is the documentation from ownCloud for that case. In theory, it is really simple:

sudo apt-get update
sudo apt-get dist-upgrade
cd /var/www/owncloud
sudo -u www-data php occ upgrade

Indeed, in my case (almost) everything succeeded. One thing I noticed was that the “contacts” app was disabled. And I was not able to update it through the GUI. Hm. However, after enabling it, the ownCloud server went into maintenance mode, but I was able to click the “Start Update” button in the GUI, which successfully updated the contacts app. Uff.

Furthermore (possibly due to the 8.1.0 update and not in general!), the following warning appeared inside the admin section: “No memory cache has been configured.” In order to get a recent php5-apcu package (since the package shipped with Ubuntu 14.04 is outdated), the following steps are required:

wget http://mirrors.kernel.org/ubuntu/pool/universe/p/php-apcu/php5-apcu_4.0.6-1_amd64.deb
sudo dpkg -i php5-apcu_4.0.6-1_amd64.deb

To enable this module, the ownCloud config.php file must be edited:

sudo nano /var/www/owncloud/config/config.php

with the following new line inside the “CONFIG array”:

'memcache.local' => '\OC\Memcache\APCu',

But that was not enough. Another fatal error appeared: “Missing memcache class \OC\Memcache\APCu for local cache”. This could be solved with these two Google findings: Open the php.ini file inside the cli section:

sudo nano /etc/php5/cli/php.ini

and add the following line:

apc.enable_cli=1
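Whether the APCu extension is actually loaded by the PHP CLI can be checked afterwards (a quick sanity check):

```shell
# List the loaded PHP modules and filter for APCu;
# a line such as "apcu" indicates the extension is active
php -m | grep -i apcu
```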

Now it’s working. This was one more good example on how Google can save your life. 😉

DONE!!!

For any more hints or corrections, please write a comment.

Policy Routing on a FortiGate Firewall


This is a small example on how to configure policy routes (also known as policy-based forwarding or policy-based routing) on a Fortinet firewall, which is really simple: only a single configuration page and you’re done. 😉

(Compared to my other PBR/PBF tutorials from Juniper ScreenOS and Palo Alto Networks, there is only one screenshot needed to explain the policy route. Ok, it is not that flexible, but easy.)

In my lab, I have a static default route via the wan1 interface. On the wan2 interface, there is a simple DSL connection to the Internet which shall be used for http/https traffic from the users. That is: everything from the users’ IP segment (192.168.161.0/24) to the destination ports 80 and 443 shall be forwarded to this DSL connection. However, an exception is still needed: if the destination is on the internal LAN, the connection should not be policy routed. (Of course, appropriate policies must be in place, too.) The configuration is done under Router -> Static -> Policy Routes:

From the fg-trust2 network (192.168.161.0/24) to any on TCP port 80 should be forwarded to the wan2 connection. But anything to other inside (private) networks should NOT be forwarded. Overview of the three policies: Only TCP ports 80 and 443 are policy forwarded.

That’s it. In the Forward Traffic Log, it is easy to see which destination interface is used, depending on the destination port:

Forward Traffic Log with Destination Interface.

Roundcube Installation Guide


Roundcube is a webmail client which is easy and intuitive to use. I am using it for my private mails, connecting via IMAP and SMTP to my hosting provider. One of the great advantages is the “flag” option, which is synchronized via IMAP to my Apple devices.

Following is a step-by-step installation guide for Roundcube plus an update scenario. It is a kind of “memo for myself”, but hopefully, others can use it as well.

I wrote this guide when Roundcube 1.1.1 was the newest version. Later in this guide, I updated the installation to version 1.1.2.

The prerequisites are a Linux distribution (I am currently using Ubuntu 14.04 LTS 64-bit), a DNS domain name pointing to the static IP address of the server, and a valid SSL certificate. I furthermore assume that there is already an Apache server running and a MySQL database active. If not, start with the following commands, which install apache2, mysql and php5 and enable three specific Apache modules. During this process, the MySQL root password must be chosen:

sudo apt-get update
sudo apt-get install apache2 mysql-server php5 php-pear php5-mysql
sudo a2enmod ssl
sudo a2enmod headers
sudo a2enmod rewrite
sudo service apache2 restart

The error message “Could not reliably determine the server’s fully qualified domain name, using 127.0.1.1 for ServerName” can be corrected by setting the correct server name:

sudo nano /etc/apache2/apache2.conf
#add the following line:
ServerName NAME-OF-THE-SERVER

Roundcube Base

Create a folder inside the /var/www path, download the current “complete” Roundcube version (not the 1.1.1 version that I used for this record!), unpack it, and change the ownership:

sudo mkdir /var/www/roundcube
cd ~
wget https://downloads.sourceforge.net/project/roundcubemail/roundcubemail/1.1.1/roundcubemail-1.1.1-complete.tar.gz
tar xvf roundcubemail-1.1.1-complete.tar.gz
sudo mv roundcubemail-1.1.1/* /var/www/roundcube/
sudo chown -R www-data:www-data /var/www/roundcube/*

In my case, the hidden “.htaccess” file was NOT copied, so I did it separately:

sudo cp roundcubemail-1.1.1/.htaccess /var/www/roundcube/
sudo chown www-data:www-data /var/www/roundcube/.htaccess

Database

Log into MySQL (you will be prompted for the root password), create a new database and user (change THISISTHEPASSWORD to a new password of your own), and grant the privileges:

mysql -u root -p
CREATE DATABASE roundcube;
CREATE USER 'roundcubeuser'@'localhost' IDENTIFIED BY 'THISISTHEPASSWORD';
GRANT ALL PRIVILEGES ON roundcube.* TO 'roundcubeuser'@'localhost';
FLUSH PRIVILEGES;
exit

Then import the initial SQL database that ships with the Roundcube download. Note that the password must be specified DIRECTLY behind the -p option. No space in between!

mysql -u roundcubeuser -pTHISISTHEPASSWORD roundcube < /var/www/roundcube/SQL/mysql.initial.sql
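To verify that the import succeeded, list the freshly created tables (the output should show the Roundcube tables, e.g., users and contacts; exact names may vary by version):

```shell
mysql -u roundcubeuser -pTHISISTHEPASSWORD -e 'SHOW TABLES;' roundcube
```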

Virtual Host

Create a new virtual host for the Apache HTTP server:

sudo nano /etc/apache2/sites-available/roundcube.conf

and paste in the following lines. Replace the “DOMAIN.TLD” with your domain. This actually creates two virtual hosts, one listening on the unencrypted http port 80 (only redirects to https), and the other one on https port 443.

<VirtualHost *:80>
    ServerName webmail.DOMAIN.TLD
    Redirect permanent / https://webmail.DOMAIN.TLD/
</VirtualHost>

<VirtualHost *:443>
    ServerName webmail.DOMAIN.TLD
    ServerAdmin webmaster@DOMAIN.TLD
    DocumentRoot "/var/www/roundcube"
    <Directory "/var/www/roundcube">
        AllowOverride All
    </Directory>
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/webmail.DOMAIN.TLD.crt
    SSLCertificateKeyFile /etc/ssl/private/webmail.DOMAIN.TLD.key
	Header always add Strict-Transport-Security "max-age=15768000; includeSubDomains; preload"
    ErrorLog /var/log/apache2/webmail.DOMAIN.TLD-error_log
    CustomLog /var/log/apache2/webmail.DOMAIN.TLD-access_log common
</VirtualHost>

To change the SSL cipher suites of apache to better ones, refer to this tutorial.

Enable the new site and reload the apache service:

sudo a2ensite roundcube
sudo service apache2 reload

Finalize via Web GUI

Now, access the following URL in order to finalize the installation:

https://webmail.DOMAIN.TLD/installer/

One thing was noted as an error: “date.timezone:  NOT OK(not set)”. This can be corrected with the following new line inside the php.ini file:

sudo nano /etc/php5/apache2/php.ini
#add the following line, correspondent to your timezone:
date.timezone = "Europe/Berlin"
#and reload the server:
sudo service apache2 reload
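You can confirm afterwards that the line is active (and not commented out) in the Apache php.ini:

```shell
grep '^date.timezone' /etc/php5/apache2/php.ini
```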

In the “Database setup” part, enter the following four values:

localhost
roundcube
roundcubeuser
THISISTHEPASSWORD

This should fit for now. You can do some tests on the final site, then it’s finished.

Finally, as recommended, delete the “installer” folder:

sudo rm -r /var/www/roundcube/installer/

File Size

One thing to adjust is the maximum file size for attachments (see here). Open the .htaccess file and adjust the following two lines:

sudo nano /var/www/roundcube/.htaccess
php_value   upload_max_filesize   30M
php_value   post_max_size         30M

Done. 😉

Update

This is a demo case in which I updated from Roundcube 1.1.1 to 1.1.2. See the changelog and the official how-to-upgrade pages from Roundcube.

At first, backup the roundcube folder:

sudo cp -r /var/www/roundcube/ ~/roundcube-backup-DATE-OF-TODAY/

and export the whole SQL database, e.g., via phpMyAdmin (not explained here).

Download the new package, extract it, and run the installto script. This worked without any errors. Great!

wget https://downloads.sourceforge.net/project/roundcubemail/roundcubemail/1.1.2/roundcubemail-1.1.2-complete.tar.gz
tar xf roundcubemail-1.1.2-complete.tar.gz
cd roundcubemail-1.1.2/
sudo bin/installto.sh /var/www/roundcube/

 

Policy-Based Routing on ScreenOS with different Virtual Routers


I already published a blog post concerning policy-based routing on a Juniper firewall within the same virtual router (VR). For some reason, I was not able to configure PBR correctly when using multiple VRs. Now it works. 😉 So, here are the required steps:

(This document from Juniper explains it all. I just made a few screenshots.)

My lab looks like this:

Juniper ScreenOS PBR with different VRs

The steps are quite similar to the PBR guide without multiple VRs. However, the “action group” must be set WITHOUT a “next-interface”. This CANNOT be done through the WebGUI, because it ALWAYS sets the “next-interface”, even if the checkmark is not checked! You MUST use the CLI such as this:

set vrouter trust-vr action-group name Action-Surf-DSL
set vrouter trust-vr action-group Action-Surf-DSL next-hop 10.49.254.1 action-entry 1

After this, the WebGUI correctly shows this entry without the next-interface. There is a “Note” on the Juniper page that describes exactly this behaviour for early ScreenOS 5.3/5.4 versions; however, in my current 6.3.0r19 version, this is still the case. Furthermore, instead of the self-referenced host route, I am using a default route (0.0.0.0/0) to the next-hop value, since it actually is my default router. With this default route, the PBR on the trust-vr works even without the so-called self-referenced host route inside the untrust-vr. (Of course, there might be scenarios in which the next-hop value is not the default router. In those situations, you must configure this self-referenced host route.)

Here we go. Refer to the descriptions under the screenshots for more details:

Extended ACL for defining the policy-routed traffic. Match Group, simply referencing the ACL. Action Group which was configured through the CLI!! Policy, referencing the match group and action group. Policy bound to the DMZ interface. Host route inside the trust-vr to the "next-hop" via the untrust-vr. In my case: The default route inside the untrust-vr to the "next-hop" value. I am doing source NAT on all outgoing connections to the Untrust2 zone. Policy Traffic Log shows the translated source addresses.

The complete CLI commands (without policies/NAT) are the following:

set vrouter "untrust-vr"
set route 0.0.0.0/0 interface ethernet0/3 gateway 10.49.254.1 permanent
exit

set vrouter "trust-vr"
set access-list extended 1 src-ip 192.168.110.0/24 dst-ip 0.0.0.0/0 dst-port 80-80 protocol tcp entry 1
set access-list extended 1 src-ip 192.168.110.0/24 dst-ip 0.0.0.0/0 dst-port 443-443 protocol tcp entry 2
set access-list extended 1 src-ip 192.168.110.0/24 dst-port 30001-30005 protocol tcp entry 3
set access-list extended 1 src-ip 192.168.110.0/24 dst-port 33002-33002 protocol tcp entry 4
set match-group name Match-Surf-DSL
set match-group Match-Surf-DSL ext-acl 1 match-entry 1
set action-group name Action-Surf-DSL
set action-group Action-Surf-DSL next-hop 10.49.254.1 action-entry 1
set pbr policy name Policy-Surf-DSL
set pbr policy Policy-Surf-DSL match-group Match-Surf-DSL action-group Action-Surf-DSL 1
exit

set interface ethernet0/5.10 pbr Policy-Surf-DSL

 

FIN.


Policy Based Forwarding on a Palo Alto with different Virtual Routers


This guide is a little bit different from my other Policy Based Forwarding blog post because it uses different virtual routers for the two ISP connections. This is quite common in order to have a distinct default route for each provider. So, in order to route certain traffic, e.g., http/https, to another ISP connection, policy-based forwarding is used.

There are two documents from Palo Alto that give advice on how to configure PBF.

I am using a PA-200 with PAN-OS 7.0.1. My lab is the following:

Palo Alto PBF with different VRs

(Note that, unlike Juniper ScreenOS, a zone is not tied to a virtual router. You actually can merge interfaces on different vrouters into the same zone. However, I prefer to configure an extra zone for each ISP to keep my security policies clearly separated.)

These are the configuration steps. See the descriptions under the screenshots for details:

Two virtual routers: default and untrust. The policy based forwarding configuration: Do not PBF private networks, but http/https to ethernet1/2. The "Forwarding" tab in detail. I am doing a source NAT for these connections. Of course, a security policy is needed, too. And a static route inside the untrust virtual router back to the default virtual router. This routes the client subnet back. The traffic log shows a few connections on ports 80/443 that egressed on interface 1/2 and were NATed.

Done.

Juniper ScreenOS: DHCPv6 Prefix Delegation


The Juniper ScreenOS firewall is one of the few firewalls that implement DHCPv6 Prefix Delegation (DHCPv6-PD). It is therefore a good fit for testing my dual-stack ISP connection from Deutsche Telekom, Germany. (Refer to this post for details about this dual-stack procedure.)

It was *really* hard to get the correct configuration in place. I was not able to do this by myself at all, and Google did not help that much either. Finally, I opened a case with Juniper to help me find the configuration error. Four weeks after opening the case, I was told which command was wrong. Now it’s working. 😉 Here we go.

Note that I will not explain how DHCPv6 prefix delegation works in general; I will only go into the details of how to configure it on a Juniper ScreenOS SSG firewall. My Google results for this case brought me to this and that page, but none of them revealed the complete working configuration commands.

The basic idea is to receive a /56 IPv6 prefix from the ISP and to hand out /64 subnets/prefixes to the client networks.

Configuration

This picture shows the main parts on how the SSG should be configured:

Juniper DHCPv6-PD

This involves the following steps:

  1. Enable IPv6 on upstream interface (mode “Host”, accept router advertisement).
  2. Enable IPv6 on client interfaces (mode “Router”, send router advertisements).
  3. Configure DHCPv6 server on client interfaces (for delivering DNS entries).
  4. Configure DHCPv6 client on the upstream interface (to receive a delegated prefix).

These are the configuration steps in the GUI. Read the descriptions under the screenshots for more information:

On the upstream Interface (eth0/0), IPv6 must be enabled in Host mode. And this interface must accept incoming router advertisements. This is just for my ISP: A PPPoE profile with IPv6CP. On the client interface, IPv6 must be enabled as router mode. No IPv6 address must be filled in. But the checkmarks for sending RAs and the O-flag must be set. Edit the DHCPv6 setting on the client interfaces ... ... to present the DNS servers (stateless DHCPv6). Edit the DHCPv6 settings for the upstream interface ... ... to receive everything, but especially the delegated prefix! Click on the highlighted section ... ... to add a prefix distribution. On left-hand side NOTHING MUST be added. Only the prefix delegation with appropriate values. Overview of my two configured client interfaces with different subnet IDs.

One special note on the prefix distribution settings: there are two fields called “SLA” and “SLA length”. It took me a while to understand what they mean:

  • SLA: This is the subnet ID in decimal notation (WTF?). For example, if you want to use the IPv6 subnet ID “42” (hexadecimal, as it appears in the address), you must convert this value to decimal, which is “66”.
  • SLA length: This is the length of the subnet ID. In my case, since I am getting a /56 but want to hand out /64 prefixes, it is 8 bits in length.
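The hexadecimal-to-decimal conversion can be done directly on the shell, e.g., for the subnet ID “42” used above:

```shell
# Convert the hexadecimal subnet ID "42" (as it appears in the
# IPv6 address) to the decimal value expected in the SLA field
sla_hex=42
sla_dec=$(printf '%d' "0x${sla_hex}")
echo "$sla_dec"    # prints 66
```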

The following listing presents all relevant CLI commands for the just configured DHCPv6-PD scenario (especially lines 30-32):

set interface "ethernet0/0" ipv6 mode "host"
set interface "ethernet0/0" ipv6 enable
set interface ethernet0/0 route
set interface "wireless0/2" ipv6 mode "router"
set interface "wireless0/2" ipv6 enable
set interface wireless0/2 route
set interface "bgroup1" ipv6 mode "router"
set interface "bgroup1" ipv6 enable
set interface bgroup1 route
set interface ethernet0/0 ipv6 ra accept
set interface wireless0/2 ipv6 ra link-address
set interface wireless0/2 ipv6 ra transmit
set interface bgroup1 ipv6 ra link-address
set interface bgroup1 ipv6 ra transmit
set interface ethernet0/0 ipv6 nd nud
set interface wireless0/2 ipv6 nd nud
set interface bgroup1 ipv6 nd nud
set interface wireless0/2 dhcp6 server
set interface wireless0/2 dhcp6 server options dns dns1 2003:180:2:8000:0:1:0:53
set interface wireless0/2 dhcp6 server options dns dns2 2003:180:2:8100:0:1:0:53
set interface wireless0/2 dhcp6 server enable
set interface bgroup1 dhcp6 server
set interface bgroup1 dhcp6 server options dns dns1 2003:180:2:8000:0:1:0:53
set interface bgroup1 dhcp6 server options dns dns2 2003:180:2:8100:0:1:0:53
set interface bgroup1 dhcp6 server enable
set interface ethernet0/0 dhcp6 client
set interface ethernet0/0 dhcp6 client options rapid-commit
set interface ethernet0/0 dhcp6 client options request dns
set interface ethernet0/0 dhcp6 client options request search-list
set interface ethernet0/0 dhcp6 client options request pd
set interface ethernet0/0 dhcp6 client pd ra-interface bgroup1
set interface ethernet0/0 dhcp6 client pd ra-interface wireless0/2 sla-id 66 sla-len 8
set interface ethernet0/0 dhcp6 client enable

 

Monitoring

This is how the GUI looks like after a received and delegated prefix:

Interfaces: transfer segment with a /64 via RA, and the two client subnets with delegated prefixes. All interface IDs are set automatically according to EUI-64 addresses. The prefix to be advertised via RA is set automatically. Note the different subnet ID (here: 42) inside my two different client interfaces. The learned /56 prefix from my ISP (Deutsche Telekom). The complete IPv6 routing table, one more time with the two different subnets.

I tested the two configured subnets with my mobile devices: one in the bgroup1 network, the other one in the wireless0/2 network. (I called my http://ip.webernetz.net script that shows the IP; refer to here.)

My iPhone that was inside the bgroup1 interface. And an Android phone on wireless0/2.

And, of course, the SSG can list many details of the learned/delegated prefixes via the CLI:

fd-we-fw01-> get interface ethernet0/0 dhcp6 client pd
DHCPv6 on interface ethernet0/0:        Interface config      : -
--------------------------------------------------------------------------------
IAPD-ID: 0, type: PD
        Prefix distribution list:
                ra interface: bgroup1   sla id: 0       sla len: 0
                ra interface: wireless0/2       sla id: 66      sla len: 8
        suggested prefix data:
                                -IPv6 Prefix: ::/0
                                 Valid Life Time                 : 00h00m00s
                                 Preferred Life Time             : 00h00m00s
        Delegated Prefix Information:
        t1: 900 t2: 1440
        state: 0
        server: 00:03:00:01:44:2b:03:19:03:00
        Delegated-Prefix list:
                prefix: 2003:50:aa10:3300::/56
        Prefix distribution list:
                ra interface: bgroup1   sla id: 0       sla len: 0
                ra interface: wireless0/2       sla id: 66      sla len: 8
fd-we-fw01->
fd-we-fw01->
fd-we-fw01-> get interface bgroup1 ipv6 ra
Router advertisement configuration info for interface bgroup1
--------------------------------------------------------------------------------
        transmit           :on
        accept             :off
        hop-limit          :64
        default-life-time  :1800
        retransmit-time    :off
        reachable-time     :off
        link-mtu           :off
        link-address       :on
        other              :off
        managed            :off
        min-adv-int        :200
        max-adv-int        :600
        next-send-time     :448
Prefix list on interface bgroup1 to be advertised via RA
Adv Prefix Flags (PF): O On Link, A Autonomous
State (St): O On Link, D Detached
--------------------------------------------------------------------------------
IPv6 Prefix:2003:50:aa10:3300::                      Len:64  PF:OA St:O
Valid Life Time                 :30d00h00m
Preferred Life Time             :07d00h00m
--------------------------------------------------------------------------------
fd-we-fw01->
fd-we-fw01->
fd-we-fw01-> get interface wireless0/2 ipv6 ra
Router advertisement configuration info for interface wireless0/2
--------------------------------------------------------------------------------
        transmit           :on
        accept             :off
        hop-limit          :64
        default-life-time  :1800
        retransmit-time    :off
        reachable-time     :off
        link-mtu           :off
        link-address       :on
        other              :off
        managed            :off
        min-adv-int        :200
        max-adv-int        :600
        next-send-time     :155
Prefix list on interface wireless0/2 to be advertised via RA
Adv Prefix Flags (PF): O On Link, A Autonomous
State (St): O On Link, D Detached
--------------------------------------------------------------------------------
IPv6 Prefix:2003:50:aa10:3342::                      Len:64  PF:OA St:O
Valid Life Time                 :30d00h00m
Preferred Life Time             :07d00h00m
--------------------------------------------------------------------------------
fd-we-fw01->
fd-we-fw01->
fd-we-fw01-> get route v6


IPv6 Dest-Routes for <untrust-vr> (0 entries)
--------------------------------------------------------------------------------------
H: Host C: Connected S: Static A: Auto-Exported
I: Imported R: RIP/RIPng P: Permanent D: Auto-Discovered
N: NHRP
iB: IBGP eB: EBGP O: OSPF/OSPFv3 E1: OSPF external type 1
E2: OSPF/OSPFv3 external type 2 trailing B: backup route


IPv6 Dest-Routes for <trust-vr> (7 entries)
--------------------------------------------------------------------------------------
         ID                                   IP-Prefix       Interface
                                                Gateway   P Pref    Mtr     Vsys
--------------------------------------------------------------------------------------
*         1                                        ::/0          eth0/0
                                fe80::462b:3ff:fe19:300   D  252      1     Root
*         2                      2003:50:aa7f:9033::/64          eth0/0
                                                     ::   C    0      0     Root
*         3   2003:50:aa7f:9033:b2c6:9aff:fefd:ca80/128          eth0/0
                                                     ::   H    0      0     Root
*         5   2003:50:aa10:3300:b2c6:9aff:fefd:ca8c/128         bgroup1
                                                     ::   H    0      0     Root
*         6                      2003:50:aa10:3342::/64     wireless0/2
                                                     ::   C    0      0     Root
*         7   2003:50:aa10:3342:b2c6:9aff:fefd:ca97/128     wireless0/2
                                                     ::   H    0      0     Root
*         4                      2003:50:aa10:3300::/64         bgroup1
                                                     ::   C    0      0     Root

fd-we-fw01->
fd-we-fw01->

 

Any questions? 😉

FortiGate 2-Factor Authentication via SMS


Two-factor authentication is quite common these days. That’s good. Many service providers offer a second authentication before entering their systems. Beside hardware tokens or code generator apps, the traditional SMS on a mobile phone can be used for the second factor.

The FortiGate firewalls from Fortinet have the SMS option built in. No feature license is required for that. Great. The only thing needed is an email-to-SMS provider for sending the text messages. The configuration process on the FortiGate is quite simple; however, both the GUI and the CLI are needed for the job. (Oh Fortinet, why aren’t you improving your GUI?)

Here is a step-by-step configuration tutorial for the two-factor authentication via SMS from a FortiGate firewall. My test case was the web-based SSL VPN portal.

The second factor is sent via SMS. More precisely: via email2sms. That is: The FortiGate sends an email to <phone-number>@email2sms-provider.tld with the authentication code. In order to use this feature, an email server as well as an SMS service must be configured. I am not using the “FortiGuard Messaging Service” for this test but a “Custom” Email-2-SMS service from the Internet (just found via Google).

I am using a FortiWiFi 90D with FortiOS 5.2.4, build688.

Email Service

The SMTP server should be configured anyway in order to receive alert emails from the FortiGate. If it is not configured yet, it is done under System -> Config -> Advanced -> Email Service:

FortiGate SMS 01 Email Service

SMS Service

The SMS service settings are directly below the email service. Only a name and the “Domain” must be entered. This was a bit confusing when I saw it for the first time, since no other options can be set. But in fact, the FortiGate will send all SMS to <number>@<domain>, so it really does not need any more information. The correct domain for the mail2sms gateway is listed by the email2sms service you chose on the Internet. (I am using websms.com, a German provider.)
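The resulting recipient address is simply the phone number concatenated with the configured domain; a minimal sketch (number and domain are placeholders, not a real provider):

```shell
# Build the email2sms recipient address <number>@<domain>
number="0049123456789"
domain="email2sms-provider.tld"
recipient="${number}@${domain}"
echo "$recipient"    # prints 0049123456789@email2sms-provider.tld
```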

FortiGate SMS 02 SMS Service

User

The most annoying point is to activate the two-factor SMS authentication for the user since it cannot be done through the GUI. Furthermore, if you add users, the GUI from FortiGate is not consistent in storing the phone number for local users. (As with almost all cases, the GUI from Fortinet is not that good.) So take care!

The phone number can be entered via the GUI, as well as the “Custom” SMS provider, but the only option for the “Enable Two-factor Authentication” is the Token, which we won’t use here:

FortiGate SMS 03 User Phone Number FortiGate SMS 04 No SMS Option

Use the CLI in order to configure the following command for each user (line 3):

fd-wv-fw04 # config user local
fd-wv-fw04 (local) # edit weberjoh2
fd-wv-fw04 (weberjoh2) # set two-factor sms
fd-wv-fw04 (weberjoh2) # next

After that, the two factor auth method “sms” is shown in the summary as well as under the users details:

FortiGate SMS 05 sms after enabled via CLI FortiGate SMS 06 sms after enabled via CLI

That’s all for the config.

Test

My use case for the two-factor authentication is the web-based SSL VPN. Following are the screenshots I’ve made during the logon process, as well as the log events:

FortiGate SMS 07 Login first factor FortiGate SMS 08a iPhone SMS received FortiGate SMS 08b Login second SMS factor FortiGate SMS 09 Successfully logged in FortiGate SMS 10 SSL-VPN Monitor FortiGate SMS 11 Event Log System FortiGate SMS 12 Event Log VPN

The corresponding log messages on the CLI look like this:

23: date=2015-12-03 time=17:23:16 logid=0100038411 type=event subtype=system level=notice vd="root" logdesc="Two-factor authentication code sent" user="weberjoh2" action="send authentication code" msg="Send two-factor authentication token code 047548 to 004********211@email2sms.websms.com"

24: date=2015-12-03 time=17:23:16 logid=0101039943 type=event subtype=vpn level=information vd="root" logdesc="SSL VPN new connection" action="ssl-new-con" tunneltype="ssl" tunnelid=0 remip=87.159.185.106 tunnelip=(null) user="N/A" group="N/A" dst_host="N/A" reason="N/A" msg="SSL new connection"

I like it. Easy to use, even for non-technical persons. 😉

Tufin SecureTrack: Adding Devices


For a few weeks now, I have been using Tufin SecureTrack in my lab, a product which analyzes firewall policies regarding their usage and the changes made by administrators (and much more). Therefore, the first step is to connect the firewalls to SecureTrack in two directions: SSH from SecureTrack to the device to analyze the configuration, and syslog from the device to SecureTrack to monitor the policy usage in real time.

This blog post shows how to add the following firewalls to Tufin: Cisco ASA, Fortinet FortiGate, Juniper ScreenOS, and Palo Alto PA.

I am running TufinOS 2.10 on a virtual machine. The Tufin Orchestration Suite (SecureTrack, etc.) is version R15-3.

Pre Note: No IPv6

Though the Tufin appliance can be configured with an IPv6 address, it is not able to communicate with firewalls via IPv6. All connections must traverse the legacy Internet Protocol. I asked Tufin support about this, and they replied: “It is not part of the current IPv6 plans, nor any road-map we are aware of.” Oh oh. At least IPv6 network objects can be analyzed, which is the main part of using SecureTrack. For the other features, I mailed a few feature requests to Tufin.

Start monitoring a new device

The configuration steps to add a new device are always the same. Under Settings -> Monitoring -> Manage Devices, select the device type under “Start monitoring a new device” and continue. Give the device a name and set the IP address to which Tufin should connect. Since I am running OSPF as well as OSPFv3 between all of my firewalls, I always enable the “Collect dynamic topology information” feature. Finally, enter the login credentials for connecting to the firewall via SSH. I always create a new read-only user for Tufin. The “Monitoring Settings” configuration can be left at its defaults.

The second step is to send syslog messages from each device to Tufin. This is solely done at the firewalls. Of course, all intermediate routers/firewalls must allow the traffic for ssh and syslog between Tufin and the monitored devices.

Cisco ASA

These are the steps for connecting to a Cisco ASA firewall via ssh and syslog. (ASA 5505, 9.2(4)).

Tufin add Cisco ASA (1) Tufin add Cisco ASA (2) Tufin add Cisco ASA (3) Tufin add Cisco ASA (4) Tufin add Cisco ASA (5) Tufin add Cisco ASA (6) Tufin add Cisco ASA (7) Currently logged in administrators. Tufin add Cisco ASA (9) Tufin add Cisco ASA (10) Tufin add Cisco ASA (11) Tufin add Cisco ASA (12) Tufin add Cisco ASA (13)

Fortinet FortiGate

The ssh connection for a FortiGate is configured through the GUI. (FortiWiFi 90D, v5.2.4, build688).

[Screenshots: adding the FortiGate in Tufin; I created a new admin profile called "read-only"; one screenshot shows the admin logins.]
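For reference, the read-only admin profile and user can also be created on the CLI. A sketch; the profile name, username, password placeholder, and the selected permission groups are my own choices:

```
config system accprofile
    edit "read-only"
        set sysgrp read
        set netgrp read
        set fwgrp read
    next
end
config system admin
    edit "tufin"
        set accprofile "read-only"
        set password <secret>
    next
end
```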

Since I am already using a syslog-ng server, and since only one syslog server is configurable through the FortiGate GUI (oh Fortinet, why aren’t you improving your GUI?), this must be done via the CLI:

config log syslogd2 setting
    set status enable
    set server "192.168.120.19"
end


Juniper ScreenOS

The SSG firewalls are listed as “Juniper NetScreen” within Tufin. These are the steps. (SSG 5, 6.3.0r20.0).

[Screenshots: adding the Juniper NetScreen in Tufin, steps 1-10; one of them shows the current login sessions.]
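On the ScreenOS CLI, the read-only user and the syslog destination can be sketched like this. The username, password placeholder, and the Tufin IP address are assumptions from my lab:

```
set admin user "tufin" password "<secret>" privilege "read-only"
set syslog config "192.168.120.19"
set syslog config "192.168.120.19" facilities local0 local0
set syslog enable
```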

Palo Alto PA

Finally, the Palo Alto. Note that every security policy rule needs the log forwarding profile attached. Furthermore, the “Config” log messages can be sent to Tufin, too. (PA-200, PAN-OS 7.0.3).

[Screenshots: adding the Palo Alto in Tufin, steps 1-13; note the log forwarding profile in EVERY security policy rule, visible by the icons on all lines.]
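The syslog server profile can also be created on the PAN-OS CLI. A rough sketch; the profile name, server name, and the Tufin IP address are assumptions, and the log forwarding profile that references this syslog profile must still be attached to every security policy rule:

```
set shared log-settings syslog Tufin server tufin-syslog server 192.168.120.19 transport UDP port 514 facility LOG_USER
commit
```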

Verifying Syslog

If you want rule and object usage analysis, it is crucial that Tufin receives syslog messages. However, after adding a new monitored device, the corresponding icon turns green even though no syslog messages have been received yet. Only after some time without syslog messages does it turn yellow and warn that “Usage data is not being saved”.

If you want to verify that syslog messages are received by Tufin, use tcpdump from the CLI:

[root@jw-tufin01 ~]# tcpdump -i eth0 -vv -w /tmp/syslog.log  -s 1500 src 192.168.86.1 and udp dst port 514
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 1500 bytes
Got 662
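After stopping the capture with Ctrl-C, the saved file can be read back with tcpdump as well; the file name matches the capture command above, and -A additionally prints the syslog payload as ASCII:

```
tcpdump -nn -r /tmp/syslog.log | head
tcpdump -nn -A -r /tmp/syslog.log | head -20
```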

Note that the syslog messages do not have to arrive from the same source IP address that Tufin uses to connect to the device. Depending on the setup, the two can differ. (E.g., I connect to my Juniper firewall via a different vrouter interface than the one from which the syslog messages are sent.) Tufin matches the received syslog messages to the correct device anyway.

After some hours/days/weeks of information being processed by Tufin SecureTrack, you can analyze the configurations or run rule and object usage reports.

ntopng Installation


Some time ago I published a post introducing ntopng as an out-of-the-box network monitoring tool. I am running it on a Knoppix live Linux notebook with two network cards. However, a few customers wanted a persistent installation of ntopng in their environment. So this is a step-by-step tutorial on how to install ntopng on an Ubuntu server with at least two NICs.

I already pointed to the many great features of ntopng in the previous post. If you are searching for an open source real-time network analyzer, ntopng is the tool of choice.

Network Setup

This is a rough view of the network. On a switch in the network, a monitor port is configured to send all traffic from a certain port/vlan/routing-domain to the network analyzer. (There are different names for this scenario: mirror and monitor ports, SPAN ports, source and destination ports, etc.) The eth1 port on the Linux machine is used in promiscuous mode to process everything that comes in.

The other port, eth0, must be configured with a static IP address on the network. The ntopng GUI is reached through this port (its IP address on the default port 3000).

[Figure: ntopng installation network setup]

Plan the placement and bandwidth of the mirroring carefully! Before or after a firewall/router with NAT? Does the overall bandwidth exceed the physical link of the monitor port?

Installation of ntopng

I am using a fresh Ubuntu Server 14.04 LTS edition (64-bit, which is required for ntopng). As always, I installed a few basic software packages before starting with the actual service. The packages for ntopng can be found here. Select either the “nightly” or “stable” builds; for more reliable versions, choose the stable one. Execute the following two commands on the server to add the ntopng repository:

wget http://apt-stable.ntop.org/14.04/all/apt-ntop-stable.deb
sudo dpkg -i apt-ntop-stable.deb

Have a look at “/etc/apt/sources.list.d/”: there is now an “ntop-stable.list” file containing two lines. Now you can install ntopng with:

sudo apt-get update
sudo apt-get install ntopng

This will install a bunch of packages, including ntopng, ntopng-data, pfring, redis-server, and redis-tools.

Before you can start ntopng, you need to create a configuration file:

sudo nano /etc/ntopng/ntopng.conf

Read the documentation (man ntopng) for more details. The following template can be used as a starting point:
--pid-path=/var/tmp/ntopng.pid
--daemon
--interface=eth1
--http-port=3000
--local-networks="10.0.0.0/8,192.168.0.0/16,2001:db8::/48"
--dns-mode=1
--data-dir=/var/tmp/ntopng
--disable-autologout
--community

Furthermore, you need a file called “ntopng.start”, which can be empty but must exist in the folder:

sudo touch /etc/ntopng/ntopng.start

Now you can start ntopng with:

sudo service ntopng start

It will also be started automatically after a reboot.

Promiscuous Interfaces

What’s still missing is the configuration of the eth1 interface in promiscuous mode. Furthermore, it should not get an IPv4 or IPv6 address via DHCPv4 or SLAAC. Therefore, the following configuration steps are required.

Disable IPv6 on the interface: Open the following file:

sudo nano /etc/sysctl.conf

and add the following line:

net.ipv6.conf.eth1.disable_ipv6=1


Start the eth1 interface in promiscuous mode: Open the following file:

sudo nano /etc/network/interfaces

and add these lines:

auto eth1
iface eth1 inet manual
        up ifconfig eth1 promisc up
        down ifconfig eth1 promisc down

Note: If there are already some lines that reference eth1, delete them or comment them out. For example, there should be no “iface eth1 inet dhcp” line anymore!
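To verify the interface state after a reboot, a small shell function can check whether the kernel PROMISC flag (bit 0x100) is set on a NIC. This is a sketch; “eth1” is the monitor NIC from this setup and should be adjusted as needed:

```shell
# Report whether a NIC has the kernel PROMISC flag (bit 0x100) set.
# Prints "on", "off", or "unknown" (if the interface does not exist).
promisc_state() {
    local flags
    flags=$(cat /sys/class/net/"$1"/flags 2>/dev/null) || { echo "unknown"; return; }
    if (( flags & 0x100 )); then
        echo "on"
    else
        echo "off"
    fi
}

# Example for the monitor NIC from this article:
promisc_state eth1
```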


Now, after each reboot of the server, the eth1 interface card will be in promiscuous mode and ntopng will be started automatically.

To verify that ntopng is running, have a look at netstat, which should display the running process and the open TCP port 3000:

weberjoh@jw-nb10:/etc/ntopng$ sudo netstat -l -p -n
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:6379          0.0.0.0:*               LISTEN      1280/redis-server 1
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1110/sshd
tcp6       0      0 :::22                   :::*                    LISTEN      1110/sshd
tcp6       0      0 :::3000                 :::*                    LISTEN      8543/ntopng
udp        0      0 192.168.120.10:123      0.0.0.0:*                           1729/ntpd
udp        0      0 127.0.0.1:123           0.0.0.0:*                           1729/ntpd
udp        0      0 0.0.0.0:123             0.0.0.0:*                           1729/ntpd
udp        0      0 0.0.0.0:161             0.0.0.0:*                           1307/snmpd
udp        0      0 0.0.0.0:58820           0.0.0.0:*                           1307/snmpd
udp        0      0 0.0.0.0:514             0.0.0.0:*                           1236/syslog-ng
udp6       0      0 2003:51:6012:120::1:123 :::*                                1729/ntpd
udp6       0      0 fe80::21d:92ff:fe53:123 :::*                                1729/ntpd
udp6       0      0 ::1:123                 :::*                                1729/ntpd
udp6       0      0 :::123                  :::*                                1729/ntpd
udp6       0      0 ::1:161                 :::*                                1307/snmpd

