Canonical Voices

Posts tagged with 'security'

jdstrand

Mirror, mirror…

Recently I wanted to mirror traffic from one interface to another. There is a cool utility from Martin Roesch (of Snort fame) called daemonlogger. Its use is pretty simple (note I created the ‘daemonlogger’ user and group on my Ubuntu system since I wanted it to drop privileges after starting):
$ sudo /usr/bin/daemonlogger -u daemonlogger \
  -g daemonlogger -i eth0 -o eth1

Sure enough, examining tcpdump output, anything coming in on eth0 was mirrored to eth1. I then used a crossover cable and ran tcpdump on the other system, and it all worked as advertised. This could be very useful for an IDS.
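
To verify the mirroring, watching the target interface on the receiving end should show the duplicated traffic (a quick sanity check; adjust the interface name to your setup):

$ sudo tcpdump -n -i eth1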

Then I thought how cool it would be to do this for multiple interfaces, but I quickly realized that would be confusing for whatever was trying to interpret that traffic. I read that someone used a combination of daemonlogger and vtun to get a remote snort monitor working in EC2, which got me playing….

In my test environment, I again used a crossover cable to connect two systems (let’s call them the ‘server’ and the ‘monitor’). The server system runs vtund. The monitor system then connects as a vtun client. When this happens, tap devices are created on both the server and the monitor. The vtund server is set up to start daemonlogger on connect to mirror the traffic from eth0 to the tap0 device. The monitor system can then listen on the tap device and get all the traffic. I then created a second vtun tunnel and mirrored from eth1 to tap1. The monitor could then watch both tap devices. Neat!

To accomplish this, I used this /etc/vtund.conf on the server:

options {
 type stand;
 port 5000;
 syslog daemon;
 ifconfig /sbin/ifconfig;
}
default {
  type ether;
  proto udp;
  compress no;
  encrypt no;
  multi yes;
  keepalive yes;
  srcaddr {
   iface eth2;
   addr 10.0.3.1;
  };
}
internal {
  password pass0;
  device tap0;
  up {
   ifconfig "%d up";
   program /usr/bin/daemonlogger "-u daemonlogger -g daemonlogger -i eth0 -o %d";
  };
  down {
   ifconfig "%d down";
   program /usr/bin/pkill "-f -x '/usr/bin/daemonlogger -u daemonlogger -g daemonlogger -i eth0 -o %d'";
  };
}
external {
  password pass1;
  device tap1;
  up {
   ifconfig "%d up";
   program /usr/bin/daemonlogger "-u daemonlogger -g daemonlogger -i eth1 -o %d";
  };
  down {
   ifconfig "%d down";
   program /usr/bin/pkill "-f -x '/usr/bin/daemonlogger -u daemonlogger -g daemonlogger -i eth1 -o %d'";
  };
}

The server’s /etc/default/vtun has:

RUN_SERVER=yes
SERVER_ARGS="-P 5000"

And here is the client /etc/vtund.conf:

options {
  port 5000;
  ifconfig /sbin/ifconfig;
}
default {
  type ether;
  proto udp;
  compress no;
  encrypt no;
  persist yes;
}
internal {
  device tap0;
  password pass0;
  up {
   ifconfig "%d up";
  };
  down {
   ifconfig "%d down";
  };
}
external {
  device tap1;
  password pass1;
  up {
   ifconfig "%d up";
  };
  down {
   ifconfig "%d down";
  };
}

The monitor system’s /etc/default/vtun looks like this:

CLIENT0_NAME=internal
CLIENT0_HOST=10.0.2.1
CLIENT1_NAME=external
CLIENT1_HOST=10.0.2.1

With the above, I was able to start suricata (a multithreaded, snort-compatible rewrite) to listen on both tap0 and tap1 on the monitoring system and see all the traffic. Very cool. :)

Now, this isn’t optimized for anything: sending two interfaces’ worth of traffic through one interface is likely to result in dropped packets at the monitor if the links are busy. Compression might help there. Also, the server really needs an extra interface for vtun traffic that isn’t mirrored, otherwise you end up mirroring the vtun traffic itself (in my test environment, eth2 on the server used 10.0.2.1 and the monitor used 10.0.2.2). Finally, vtun doesn’t offer any meaningful line encryption, so if you are doing this for anything resembling production, you’re going to want to use a dedicated network segment between the server and the monitor (a crossover cable worked fine in my test environment).
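
If the dropped packets become a problem, enabling compression is a one-line change in the ‘default’ section of both vtund.conf files (a sketch; it assumes vtun was built with LZO support, and trades CPU for bandwidth):

default {
  # existing options elided; change 'compress no;' to:
  compress lzo:9;
}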

Enjoy!


Filed under: security, ubuntu

Read more
jdstrand

A lot of exciting work has been going on with AppArmor and this multipart series will discuss where AppArmor is now, how it is currently used in Ubuntu and how it fits into the larger application isolation story moving forward.

Brief History and Background

AppArmor is a Mandatory Access Control (MAC) system implemented as a Linux Security Module (LSM) that confines programs to a limited set of resources. AppArmor’s security model binds access control attributes to programs rather than to users. Confinement is provided via profiles loaded into the kernel. AppArmor profiles can be in one of two modes: enforcement and complain. Profiles loaded in enforcement mode enforce the policy defined in the profile and report policy violation attempts (via syslog or auditd), such that whatever is not allowed by policy is denied. Profiles in complain mode do not enforce policy but instead report policy violation attempts. AppArmor is typically deployed as a targeted policy, where only some (eg, high-risk) applications have an AppArmor profile defined, but it also supports system-wide policy.

Some defining characteristics of AppArmor are that it:

  • is root strong
  • is path-based
  • allows for mixing of enforcement and complain mode profiles
  • uses include files to ease development
  • is very lightweight in terms of resources
  • is easy to learn
  • is relatively easy to audit

AppArmor is an established technology first seen in Immunix and later integrated into SUSE and Ubuntu and their derivatives. AppArmor is also available in Debian, Mandriva, Arch, PLD, Pardus and others. Core AppArmor functionality is in the mainline Linux kernel starting with 2.6.36. AppArmor maintenance and development is ongoing.

An example AppArmor profile

Probably the easiest way to describe what AppArmor does and how it works is to look at an example, in this case the profile for tcpdump on Ubuntu 12.04 LTS:

#include <tunables/global>
/usr/sbin/tcpdump {
  #include <abstractions/base>
  #include <abstractions/nameservice>
  #include <abstractions/user-tmp>
  capability net_raw,
  capability setuid,
  capability setgid,
  capability dac_override,

  network raw,
  network packet,

  # for -D
  capability sys_module,
  @{PROC}/bus/usb/ r,
  @{PROC}/bus/usb/** r,

  # for finding an interface
  @{PROC}/[0-9]*/net/dev r,
  /sys/bus/usb/devices/ r,
  /sys/class/net/ r,
  /sys/devices/**/net/* r,

  # for tracing USB bus, which libpcap supports
  /dev/usbmon* r,
  /dev/bus/usb/ r,
  /dev/bus/usb/** r,

  # for init_etherarray(), with -e
  /etc/ethers r,

  # for USB probing (see libpcap-1.1.x/
  # pcap-usb-linux.c:probe_devices())
  /dev/bus/usb/**/[0-9]* w,

  # for -z
  /bin/gzip ixr,
  /bin/bzip2 ixr,

  # for -F and -w
  audit deny @{HOME}/.* mrwkl,
  audit deny @{HOME}/.*/ rw,
  audit deny @{HOME}/.*/** mrwkl,
  audit deny @{HOME}/bin/ rw,
  audit deny @{HOME}/bin/** mrwkl,
  owner @{HOME}/ r,
  owner @{HOME}/** rw,

  # for -r, -F and -w
  /**.[pP][cC][aA][pP] rw,

  # for convenience with -r (ie, read 
  # pcap files from other sources)
  /var/log/snort/*log* r,

  /usr/sbin/tcpdump r,

  # Site-specific additions and overrides. See 
  # local/README for details.
  #include <local/usr.sbin.tcpdump>
}

This profile is representative of traditional AppArmor profiling for a program that processes untrusted input over the network. As can be seen:

  • profiles are simple text files
  • comments are supported in the profile
  • absolute paths as well as file globbing can be used when specifying file access
  • various access controls for files are present. From the profile we see ‘r’ (read), ‘w’ (write), ‘m’ (memory map as executable), ‘k’ (file locking), ‘l’ (creation of hard links), and ‘ix’ (execute another program, with the new program inheriting policy). Other access rules also exist, such as ‘Px’ (execute under another profile, after cleaning the environment), ‘Cx’ (execute under a child profile, after cleaning the environment), and ‘Ux’ (execute unconfined, after cleaning the environment)
  • access controls for capabilities are present
  • access controls for networking are present
  • explicit deny rules are supported, to override other allow rules (eg access to @{HOME}/bin/bad.sh is denied with auditing due to ‘audit deny @{HOME}/bin/** mrwkl,’ even though general access to @{HOME} is permitted with ‘@{HOME}/** rw,’)
  • include files are supported to ease development and simplify profiles (eg, #include <abstractions/base>, #include <abstractions/nameservice>, #include <abstractions/user-tmp>, #include <local/usr.sbin.tcpdump>)
  • variables can be defined and manipulated outside the profile (#include <tunables/global> for @{PROC} and @{HOME})
  • AppArmor profiles are fairly easy to read and audit

Complete information on the profile language can be found in ‘man 5 apparmor.d’ as well as the AppArmor wiki.

Updating and creating profiles

AppArmor uses the directory hierarchy described in the policy layout documentation, but most of the time you are either updating an existing profile or creating a new one, so the files you care most about are in /etc/apparmor.d.

The AppArmor wiki has a lot of information on debugging and updating existing profiles. AppArmor denials are logged to /var/log/kern.log (or /var/log/audit/audit.log if auditd is installed). If an application is misbehaving and you think it is because of AppArmor, check the logs first. If there is an AppArmor denial, adjust the policy in /etc/apparmor.d, then reload the policy and restart the program like so:

$ sudo apparmor_parser -R /etc/apparmor.d/usr.bin.foo
$ sudo apparmor_parser -a /etc/apparmor.d/usr.bin.foo
$ <restart application>

Oftentimes it is enough to just reload the policy without unloading/loading the profile or restarting the application:

$ sudo apparmor_parser -r /etc/apparmor.d/usr.bin.foo

Creating a profile can be done either with tools or by hand. Due to the current pace and focus of development, the tools are somewhat behind and lack some features, so it is generally recommended that you profile by hand instead; a minimal skeleton is sketched below.
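
As a starting point, a bare-bones profile for a hypothetical /usr/bin/foo might look like the following (the path and rules are illustrative; grow the profile by watching the logs for denials):

#include <tunables/global>

/usr/bin/foo {
  #include <abstractions/base>

  # the program itself
  /usr/bin/foo r,

  # example: allow reading the user's own files
  owner @{HOME}/** r,
}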

When profiling, keep in mind that:

  • AppArmor provides an additional permission check to traditional Discretionary Access Controls (DAC). DAC is always checked in addition to the AppArmor permission checks. As such, AppArmor cannot override DAC to provide more access than what would be normally allowed.
  • AppArmor normalizes path names. It resolves symlinks and considers each hard link as a different access path.
  • AppArmor evaluates file access by pathname rather than using on disk labeling. This eases profiling since AppArmor handles all the labelling behind the scenes.
  • Deny rules are evaluated after allow rules and cannot be overridden by an allow rule.
  • Creation of files requires the create permission (implied by w) on the path to be created. Separate rules for writing to the directory where the file resides are not required. Deletion works like creation but requires the delete permission (implied by w). Copy requires ‘r’ of the source with create and write at the destination (implied by w). Move is like copy, but also requires delete at the source.
  • The profile must be loaded before an application starts for the confinement to take effect. You will want to make sure that you load policy during boot before any confined daemons or applications.
  • The kernel will rate limit AppArmor denials, which can cause problems while profiling. You can avoid this by installing auditd or by adjusting rate limiting in the kernel:
    $ sudo sysctl -w kernel.printk_ratelimit=0

Resources

There is a lot of documentation on AppArmor (though some is still in progress); see the AppArmor wiki and ‘man 5 apparmor.d’ for starting points.

Next time I’ll discuss the specifics of how Ubuntu uses AppArmor in the distribution.

Thanks to Seth Arnold and John Johansen for their review.


Filed under: canonical, security, ubuntu

Read more
jdstrand

While this may be old news to some, I only just now figured out how to conveniently use multiple identities in OpenSSH. I have several ssh keys, but only two that I want to use with the agent: one for personal use and one for work. Most of the time I’d like to avoid specifying which identity to use on the command line, and just use ssh like so:

$ ssh <personal>
$ ssh <work>
$ ssh -i ~/.ssh/other.id_rsa <other>

where the first two use the agent with ~/.ssh/id_rsa and ~/.ssh/work.id_rsa respectively, and the last does not use the agent. ‘man ssh_config’ tells me that ssh tries all the different IdentityFile configuration directives (in order) and also honors the IdentitiesOnly option. Therefore, I can set up my ~/.ssh/config to have something like:
# This makes it so that only my default identity
# and my work identity are used by the agent
IdentitiesOnly yes
IdentityFile ~/.ssh/id_rsa
IdentityFile ~/.ssh/work.id_rsa

# Default to using my work key with work domains
Host *.work1.com *.work2.com
    IdentityFile ~/.ssh/work.id_rsa

With the above, it all works the way I want. Cool! :)
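
To double-check which keys ssh will offer, ssh-add can add and list the agent keys (assuming the agent is running):

$ ssh-add ~/.ssh/id_rsa ~/.ssh/work.id_rsa
$ ssh-add -l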


Filed under: security, ubuntu

Read more
pitti

New PostgreSQL microreleases with two security fixes and several bug fixes were just announced publicly.

I spent the morning with the packaging orgy for Debian unstable and experimental (now uploaded), Debian Wheezy (update sent to security team), Ubuntu hardy, lucid, natty, oneiric, precise (LP #1008317) and my backports PPA.

I tested these fairly thoroughly, but please let me know if you encounter any problems with them.

Read more
Dustin Kirkland

I stepped away from a busy schedule of awesome sessions at the Ubuntu Developer Summit in Oakland, CA to speak for a few minutes about the requirement of "openness" in modern Cloud Computing, the absolute necessity of security and encryption of data, and the benefits of Ubuntu as both a Cloud host and guest. Enjoy!

If you're interested in learning more about security considerations when planning your cloud or big data deployment, consider subscribing to Gazzang's blog feed, or reading some of our white papers.

Cheers!
:-Dustin

Read more
jdstrand

I just learned to my chagrin that wordpress.com would update my atom feed every time I clicked the ‘Update’ button in my blog. As a result, I have been spamming Planet Ubuntu pretty heavily lately, for which I apologize. Thanks to jcastro and especially nigelb for figuring out that the fix was simply to have the planet use the rss feed instead of the atom feed. While this post/apology could be argued to be yet more spam, I thought I’d pass this information along so others might check their feeds and adjust accordingly. The wiki has been updated for this change, and you can reference the change made to my planet-ubuntu settings in Launchpad if you want to check your settings.


Filed under: canonical, security, ubuntu

Read more
mandel

In the last post I explained how to set the security attributes of a file on Windows. What naturally follows such a post is explaining how to implement the os.access method so that it takes those settings into account, because the default Python implementation ignores them. Let’s first define when a user has read access in our use case:

A user has read access if the user’s SID has read access or the SID of the ‘Everyone’ group has read access.

The above also includes any type of configuration like rw or rx. In order to do this we have to understand how Windows NT sets the security of a file. On Windows NT the security of a file is set using a bitmask of type DWORD, which is comparable to a 32-bit unsigned long in ANSI C, and this is as far as the normal things go; let’s continue with the bizarre Windows implementation. For some reason I cannot understand, rather than going with the more intuitive solution of using one bit per right, the Windows developers decided to use a combination of bits per right. For example, to set the read flag 5 bits have to be set, for the write flag 6 bits are used, and for execute 4 bits are used. To make matters worse the masks overlap, that is, if we remove the read flag we will also be removing a bit of the execute mask, and there is no documentation to be found about the different masks that are used…

Thankfully for us the cfengine project has already been through this process and discovered by trial and error the exact bits that provide the read rights. The magic number is:

0xFFFFFFF6

Therefore we can simply AND this mask with an existing right to remove the read flag. The mask also means that the only important bits we are interested in are bits 0 and 3, which when both set mean that the read flag was granted. To make matters more complicated, the ‘Full Access’ right does not use those flags. In order to know if a user has Full Access we have to look at bit 28, which, if set, represents the ‘Full Access’ flag.

So to summarize: to know if a user has the read flag we first look at bit 28 to test for the ‘Full Access’ flag; if ‘Full Access’ was not granted we look at bits 0 and 3, and when both of them are set the user has the read flag. Easy, right? ;) Now to the practical example: the code below does exactly what I just explained using Python and the win32api and win32security modules.

from win32api import GetUserName
 
from win32security import (
    LookupAccountSid,
    GetFileSecurity,
    DACL_SECURITY_INFORMATION
)

EVERYONE_GROUP = 'Everyone'
 
def _int_to_bin(n):
    """Convert an int to a bin string of 32 bits."""
    return "".join([str((n >> y) & 1) for y in range(32-1, -1, -1)])
 
def _has_read_mask(number):
    """Return if the read flag is present."""
    # get the bin representation of the mask
    binary = _int_to_bin(number)
    # there is actually no documentation of this in MSDN, but if bit 28 is
    # set the mask has full access; more info can be found here:
    # http://www.iu.hio.no/cfengine/docs/cfengine-NT/node47.html
    if binary[28] == '1':
        return True
    # there is no documentation in MSDN about this either, but if bits 0 and
    # 3 are both set we have the read flag; more info can be found here:
    # http://www.iu.hio.no/cfengine/docs/cfengine-NT/node47.html
    return binary[0] == '1' and binary[3] == '1'
 
def access(path):
    """Return if the path is at least readable."""
    # for a file to be readable it has to be readable either by the user or
    # by the everyone group
    security_descriptor = GetFileSecurity(path, DACL_SECURITY_INFORMATION)
    dacl = security_descriptor.GetSecurityDescriptorDacl()
    sids = []
    for index in range(0, dacl.GetAceCount()):
        # collect the SID of each ACE whose access mask grants read
        ace = dacl.GetAce(index)
        if _has_read_mask(ace[1]):
            sids.append(ace[2])
    accounts = [LookupAccountSid('',x)[0] for x in sids]
    return GetUserName() in accounts or EVERYONE_GROUP in accounts

When I wrote this my brain was in a WTF state, so I’m sure the horrible _int_to_bin function can be exchanged for Python’s built-in bin function. If you fancy doing it I would greatly appreciate it; I cannot take this any longer ;)
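
For what it’s worth, here is a sketch of the same test using plain bitwise operations instead of string slicing (it keeps the exact semantics of the version above, where string index i corresponds to numeric bit 31 - i):

def _has_read_mask_bitwise(number):
    """Bitwise equivalent of _has_read_mask."""
    # string index 28 in _int_to_bin corresponds to numeric bit 3
    if number & (1 << 3):
        return True
    # string indices 0 and 3 correspond to numeric bits 31 and 28
    return bool(number & (1 << 31)) and bool(number & (1 << 28))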

Read more
mandel

While working on making the Ubuntu One code more multiplatform I found myself having to write some code that would set the attributes of a file on Windows. Ideally os.chmod would do the trick, but of course this is Windows, and it is not fully supported. According to the Python documentation:

Note: Although Windows supports chmod(), you can only set the file’s read-only flag with it (via the stat.S_IWRITE and stat.S_IREAD constants or a corresponding integer value). All other bits are ignored.

Grrrreat… To solve this issue I have written a small function that allows setting the attributes of a file by using the win32api and win32security modules. This only partially solves the issue, since 0444 and friends cannot be perfectly mapped to the Windows world. In my code I have made the assumption that using the groups ‘Everyone’ and ‘Administrators’ plus the user name would be close enough for our use cases.

Here is the code in case anyone has to go through this:

from win32api import GetUserName

from win32security import (
    LookupAccountName,
    GetFileSecurity,
    SetFileSecurity,
    ACL,
    DACL_SECURITY_INFORMATION,
    ACL_REVISION
)
from ntsecuritycon import FILE_GENERIC_READ
 
EVERYONE_GROUP = 'Everyone'
ADMINISTRATORS_GROUP = 'Administrators'
 
def _get_group_sid(group_name):
    """Return the SID for a group with the given name."""
    return LookupAccountName('', group_name)[0]
 
 
def _set_file_attributes(path, groups):
    """Set file attributes using the wind32api."""
    security_descriptor = GetFileSecurity(path, DACL_SECURITY_INFORMATION)
    dacl = ACL()
    for group_name in groups:
        # set the attributes of the group only if not null
        if groups[group_name]:
            group_sid = _get_group_sid(group_name)
            dacl.AddAccessAllowedAce(ACL_REVISION, groups[group_name],
                group_sid)
    # the dacl now has the info for the different groups passed in the parameters
    security_descriptor.SetSecurityDescriptorDacl(1, dacl, 0)
    SetFileSecurity(path, DACL_SECURITY_INFORMATION, security_descriptor)
 
def set_file_readonly(path):
    """Change path permissions to readonly in a file."""
    # we use the win32 api because chmod just sets the readonly flag and
    # we want to have more control over the permissions
    groups = {}
    groups[EVERYONE_GROUP] = FILE_GENERIC_READ
    groups[ADMINISTRATORS_GROUP] = FILE_GENERIC_READ
    groups[GetUserName()] = FILE_GENERIC_READ
    # the above is more or less equivalent to 0444
    _set_file_attributes(path, groups)

For those who might want to remove read access from a group: just leave the group out of the groups parameter, which removes it from the security descriptor.
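
As a usage sketch, here is how you could grant the current user read and write while leaving everyone else out (the path is hypothetical, and FILE_GENERIC_WRITE comes from ntsecuritycon):

from ntsecuritycon import FILE_GENERIC_WRITE

groups = {GetUserName(): FILE_GENERIC_READ | FILE_GENERIC_WRITE}
_set_file_attributes(r'C:\temp\example.txt', groups)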

Read more
pitti

For a test suite I need to create a local SSL-enabled HTTPS server in my Python project. I googled around and found various recipes using pyOpenSSL, but all of those are quite complicated, and I didn’t even get the referenced one to work.

Also, Python has shipped its own built-in SSL module for quite a while. After reading some docs and playing around, I eventually got it to work with a remarkably simple piece of code using the builtin ssl module:

import BaseHTTPServer, SimpleHTTPServer
import ssl

httpd = BaseHTTPServer.HTTPServer(('localhost', 4443), SimpleHTTPServer.SimpleHTTPRequestHandler)
httpd.socket = ssl.wrap_socket(httpd.socket, certfile='path/to/localhost.pem', server_side=True)
httpd.serve_forever()

(I use port 4443 so that I can run the tests as a normal user; the usual port 443 requires root privileges.)
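
On Python 3, the same trick works with the renamed modules and an ssl.SSLContext (a sketch, assuming the same certificate file):

import http.server
import ssl

httpd = http.server.HTTPServer(('localhost', 4443),
                               http.server.SimpleHTTPRequestHandler)
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain('path/to/localhost.pem')
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()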

Way to go, Python!

Read more
jdstrand

So, like a lot of people, I get asked to install Ubuntu on friends’ and family’s computers. I talk to them about what their use cases are and more often than not (by far) I install it on their systems. That’s cool. What is less cool is tech support for said installation. Not that I mind doing it or that there is a lot to do, but it becomes problematic when they go home and sit behind the NAT router from their ISP, and I can’t just connect to them to fix something I forgot or to help them out of a jam. Before I go any further let me say that what I am going to describe is never done on machines without the owner’s permission and that I am always upfront that my account on his/her machine is an administrative account with access to everything on the system (barring any encryption they use, of course). I should also mention that what I am describing is more in the ‘fun hack that other people might like’ category, and not in the ‘serious systems administration’ category. In other words, this is total crack, but it is fun crack. :)

Now, there are a lot of ways to do this, and I have tried some and surely missed many others. Here are a few I’ve tried:

  • Giving realtime tech support over the phone
  • Giving realtime tech support over some secure/encrypted chat mechanism
  • Email support
  • OpenSSH access from my machine to theirs, including adjusting their router for port forwarding
  • VPN access from their machine to my network, at which point I can OpenSSH to their machine

Rather than going through a comparison of all the different techniques, let’s just say they all have issues: realtime support almost always entails describing some obscure incantation to fix the problem, which is not confidence-inspiring for them and is extremely time-consuming. Email is too slow. Routers get reset, and straight OpenSSH doesn’t work when they are away from home. VPN access is not bad, especially with the use of OpenVPN client and server certificates. It has the added benefit of being opt-in by the user, and is easy to use with the network-manager-openvpn-gnome package. It is probably the most legitimate form of access, and should be highly considered, especially for corporate environments. The only real issues are that some draconian networks will block VPN traffic and that my IP happens to change, so they have to fiddle with their connection settings. So I started doing something different.

Remotely triggered reverse OpenSSH connections
The basic idea is this: on the client install OpenSSH, harden it a bit, install a firewall so that no one can connect to it, then create a cron job to poll an HTTP server you have access to for an IP address to connect to, then create a reverse SSH connection to that IP address. If this sounds a little shady and a bit like a botnet, well, you’d be right, but again, I did ask for permission first. :)

So, more specifically, on the remote machine (ie, the one you want to administer):

  • Install openssh-server and set /etc/ssh/sshd_config with the following:
    # Force key authentication (ie, no passwords)
    PasswordAuthentication no
    # Only allow logins to my account
    AllowUsers me
    Obviously, you will need to copy your ssh key over to this machine (man ssh-copy-id) before restarting OpenSSH and putting the above into effect.
  • Set up a firewall. Eg:
    $ sudo ufw enable
  • Create some passwordless ssh keys. Eg:
    $ ssh-keygen -f revssh.id_rsa
  • Create a script to poll some HTTP server, then create the reverse connection (this is an abridged script; I’ve omitted error checking and locking for brevity, and a crontab sketch for scheduling it follows this list):
#!/bin/sh
    hname=`hostname | cut -f 1 -d '.'`
    ip=`elinks -no-home -dump "http://your.web.server/$hname" | head -1 | awk '{print $1}' | egrep '^[1-9][0-9\.]*$'` || {
    #echo "Could not obtain ip" >&2
    exit 0
    }
    echo "Connecting to $ip"
    # we use StrictHostKeyChecking=no so that keys gets added without prompting
    ssh -oStrictHostKeyChecking=no -i $HOME/.ssh/revssh.id_rsa -p 8080 -NR 3333:localhost:22 revssh@"$ip" sleep 30 || true
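
To poll from cron, an entry like this checks for a pending connection request every five minutes (the path and interval are illustrative; it assumes the script above was saved as /usr/local/bin/revssh-poll):

*/5 * * * * /usr/local/bin/revssh-poll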

To remotely administer a machine, create a file on the server named after the client’s hostname, containing only the external IP address of your network; the remote client will pick it up and try to set up a reverse ssh connection on port 8080 (usually open even on the most draconian firewalls, but you could also use 53/tcp, 80/tcp or 443/tcp), after which you can connect to the remote machine with something like:

$ ssh -t -p 3333 localhost

Caveats
This is hardly perfect. For one thing, the client is polling an HTTP server, so it can easily be man-in-the-middled, but that isn’t a big deal because even if the attacker had full knowledge of this technique, all it gives is a connection to OpenSSH on the client, which is configured to only allow connections from ‘me’ and with my ssh key. This could of course be fixed by connecting with HTTPS and using connection.ssl.cert_verify=1 with elinks. Similarly, the HTTP server could be subverted and under attacker control. For the client, this is no different from the MITM attack in that the attacker really doesn’t have much to work with due to the OpenSSH configuration. Also, the client completely ignores the fingerprint of the server it is connecting to, but again, this is not a huge deal because of our OpenSSH configuration on the client; you will, however, need to be extra careful in checking fingerprints when connecting to the reverse connection.

You need to remember to update your webserver (ie, just remove the file) so the client isn’t always trying to connect to you. Also, it is somewhat inconvenient that the reverse connection is still there after you log out, so instead of using ‘exit’ to log out of the remote machine, use something like:
$ kill -9 `ps auxww | grep [s]sh | grep 3333 | awk '{print $2}'`

While it is relatively easy to set up the client, it is somewhat harder to set up the initiating end. First off, you need a webserver that the client can access. Then you need the ssh server on your local machine to listen on port 8080 (done either via a port redirect or a separate ssh server). You also need to set up a non-privileged ‘revssh’ user (eg, /bin/false for the shell, a disabled password, etc) on your machine. If you are behind a firewall/router you need to allow connections through your firewall to your machine so that the connection to port 8080 from the client is not blocked. Finally, if you remotely administer multiple clients you will want to keep track of their ssh fingerprints, because when you connect to ‘-p 3333 localhost’ they will conflict with each other (most annoying, but workable). I have written a ‘revssh_allow’ script to automate the above for me (not included, as it is highly site-specific). It will: fire up an sshd server on port 8080 that is specially configured for this purpose, adjust my local firewall to open port 8080/tcp from the client, connect to my router to set up the port redirection to my machine, poll (via ‘netstat -atn | grep ":3333.*LISTEN"’) for the connection from the client, then remind me how to connect to the client and how to properly kill the connection.

Summary
So yeah, this is a fun hack. Is it something to put in production? Probably not. Does it work for administering friend’s and family’s computers? Absolutely, but I’ll have to see how well it works over the long haul.

Have fun!


Filed under: security, ubuntu

Read more
jdstrand

A proposed security update for chromium-browser on Ubuntu 10.04 LTS is available. If you are able, please test and comment in https://launchpad.net/bugs/602142.


Filed under: security, ubuntu

Read more
jdstrand

Fabien Tassin (fta) has prepared a security update for chromium-browser for Ubuntu 10.04 LTS (Lucid Lynx). Please test and provide positive and/or negative feedback in: https://bugs.launchpad.net/ubuntu/+source/chromium-browser/+bug/591474.

This update addresses the following upstream issues:
http://googlechromereleases.blogspot.com/2010/06/stable-channel-update.html.

Thanks Fabien!


Filed under: security, ubuntu

Read more
Ara Pulido

Background

Firefox 3.0 and xulrunner 1.9 are now unsupported by Mozilla. Rather than backporting security fixes to these now, we are moving to a support model where we will be introducing major new upstream versions in stable releases. The reason for this is the support periods from Mozilla are gradually becoming shorter, and it will be more and more difficult for us to maintain our current support model in the future.

What we are going to do

We are going to release Firefox 3.6.4 as a minor update to the 3.6 series in Lucid. This will also be rolled out to Hardy, Jaunty and Karmic (along with xulrunner 1.9.2.4). The update for Lucid is quite trivial, but the update in Hardy, Jaunty and Karmic is not quite as simple.

Before releasing these updates to the public, we need testing in Firefox, the extensions in the archive and distributions upgrades after those updates. We have published all these packages in a PPA and we will track test results before moving anything to the archive.

How you can help

We need people running *Hardy* (Jaunty and Karmic will see a similar call for testing in the following days) on bare metal or in a virtual machine. If you are willing to help, you can follow the instructions below:

  1. Add the Mozilla Security PPA to your software sources

    You need to manually edit your /etc/apt/sources.list and add the following lines:


    deb http://ppa.launchpad.net/ubuntu-mozilla-security/ppa/ubuntu hardy main
    deb-src http://ppa.launchpad.net/ubuntu-mozilla-security/ppa/ubuntu hardy main

    After saving the file, you have to run:


    sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 7EBC211F
    sudo apt-get update
    sudo apt-get dist-upgrade

  2. You have to have an account in our tracking system. Go to http://mozilla.qa.ubuntu.com and click on “Log In” and “Create New Account”
  3. Explore your Firefox installation

    Basically we want people to perform the same activities that they do daily, without issues. In order to make testing easier, here is a checklist of things worth checking:

    • The upgrade to Firefox 3.6.4 goes smoothly.
    • The extensions get upgraded as well.
    • All the Firefox plugins (i.e. Flash, Java) still work.
    • The extensions work correctly.
    • Full distributions upgrades are not broken.
    • Upgrades work with only the security pocket enabled (ie, hardy-updates disabled)

To report your findings you need to use the test tracker.

Once you have selected the Hardy image, you will see a set of “testcases”, with a summary of how many reports have been sent. Obviously, the most important one is “Firefox”.

List of testcases

Once you open one of the testcases, you will be able to report back your findings if something went wrong. Even if everything went fine, it is always good to report back the success (“Passed”) with a comment on the activities you performed.

Report back

Use the “Firefox” testcase for general testing (upgrade, rendering, plugins, etc.) and the rest of the testcases if you want to report something more specific (Upgrades to Lucid, specific extensions errors, etc.)

IMPORTANT!! How to file bugs

As we are testing a PPA, not an official Ubuntu package, if you find an issue is NOT OK to file a bug in Launchpad. Rather than doing so, please, just explain your issue in the Comments field of the tracker and mark the test as Failed.

The tracker requires a bug number in order to mark a test as Failed. To bypass this requirement, just use the bug number “1” ;-)

Thanks for helping to keep Ubuntu stable releases secure!


Read more
jdstrand

I make use of the Master Password feature in Firefox. While not on by default, when enabled this feature encrypts your Firefox saved passwords on disk, and Firefox will prompt you when you need access to a saved password. When your browser is not running, your passwords are safe. There is a tool to try to brute force your master password if your machine is stolen, but as long as you use a strong password you should be ok (or at the very least, give you time to change them). For more information, see http://kb.mozillazine.org/Master_password.

This is a nice feature, and one which Chromium lacks. If you let Chromium save your passwords, they are stored in the ‘~/.config/chromium/Default/Web Data’ sqlite database. Displaying them is surprisingly easy (this is 5.0.342.9~r43360-0ubuntu2 on Ubuntu 10.04 LTS; newer versions may save them somewhere else):

$ echo 'SELECT username_value, password_value FROM logins;' | sqlite3 ~/.config/chromium/Default/Web\ Data | grep -v '^|$'
username|password
username2|password2

As you can see, in essence your passwords are stored in plain text on your disk (though the ~/.config/chromium directory does have 0700 permissions). I won’t go into the reasons why Google hasn’t implemented this feature yet, since people can read the bug, but it seems clear that:

  • Google is not going to fix this anytime soon
  • People need a way to protect themselves

There are some alternatives with LastPass and RoboForm, but these apparently require you to store your passwords online (I’ve not verified this personally). As it stands, there is not a way to lock your saved passwords, so I encourage Chromium users to encrypt their data using eCryptfs or LUKS full disk encryption so that at least when you turn off your computer the passwords are not readily available. In Ubuntu, you can:

  • setup LUKS full disk encryption using the alternate installer
  • setup an encrypted home directory in all the Desktop and Server installers (or migrate an existing home directory by using ‘ecryptfs-migrate-home’)
  • setup an encrypted private directory using ‘ecryptfs-setup-private’ (if you go this route, you’ll want to move ~/.config/chromium and ~/.cache/chromium into the encrypted directory and use symlinks to point to them, as sketched below)
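
The private-directory shuffle is just a move plus a symlink per directory (a sketch; it assumes ~/Private is the eCryptfs mount point and that Chromium is not running):

$ mv ~/.config/chromium ~/Private/chromium-config
$ ln -s ~/Private/chromium-config ~/.config/chromium
$ mv ~/.cache/chromium ~/Private/chromium-cache
$ ln -s ~/Private/chromium-cache ~/.cache/chromium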

In this scenario, normal DAC permissions will protect your passwords on multiuser systems (though you’ll need to be careful about the security of backups) and encrypted disks/folders will protect them in the case of theft. As always, please be vigilant about locking your screen when you leave your computer logged in….


Filed under: security, ubuntu

Read more
jdstrand

Upstream ClamAV pushed out an update via freshclam that crashed versions 0.95 and earlier on 32-bit systems (Ubuntu 9.10 and earlier are affected). Upstream issued a fix via freshclam within 15 minutes, but affected users’ clamd daemon will not restart automatically. People running ClamAV should check that it is still running. For details see:

http://lurker.clamav.net/message/20100507.110656.573e90d7.en.html


Filed under: security, ubuntu, ubuntu-server

Read more
pitti

Yesterday PostgreSQL released new security/bug fix microreleases 8.4.2, 8.3.9, and 8.1.19, which fix two security issues and a whole bunch of bugs.

Updates for all supported Ubuntu releases are built in the ubuntu-security-proposed PPA. They pass the upstream and postgresql-common test suites, but more testing is heavily appreciated! Please give feedback in bug LP#496923.

Thanks!

Read more