Canonical Voices

Posts tagged with 'linux'

Prakash

From: http://www.slideshare.net/blackducksoftware/the-2013-future-of-open-source-survey-results

Black Duck and North Bridge announced the results of the seventh annual Future of Open Source Survey. The 2013 survey represents the insights of more than 800 respondents – the largest pool in the survey’s history – from both non-vendor and vendor communities.

Read more
Prakash

Netflix, the popular video-streaming service that takes up a third of all internet traffic during peak hours, isn’t just the single largest internet traffic service. Netflix, without doubt, is also the largest pure cloud service.

Netflix, with more than a billion video delivery instances per month, is the largest cloud application in the world.

At the Linux Foundation’s Linux Collaboration Summit in San Francisco, California, Adrian Cockcroft, director of architecture for Netflix’s cloud systems team, after first thanking everyone “for building the internet so we can fill it with movies”, said that Netflix’s Linux, FreeBSD, and open-source based services are “cloud native”.

By this, Cockcroft meant that even with more than a billion video instances delivered every month over the internet, “there is no datacenter behind Netflix”. Instead, Netflix, which has been using Amazon Web Services since 2009 for some of its services, moved its entire technology infrastructure to AWS in November 2012.

Read More.

Read more
facundo

Linux Containers


At the level of general-purpose virtual machines (so, ruling out ScummVM and the like), I always got by with VirtualBox. Although it now belongs to Oracle and I don't look kindly on that, it has always worked quite well (as long as you don't ask it for anything too crazy), and it's a good way to keep a Windor running even if you spend the whole day in Linux (for example, to be able to file invoices with AFIP, damn them).

Even back when I worked at Ericsson, where they made me use Windor, I had a VMWare with an Ubuntu installed (a Gutsy, or a Hardy, I think... so much water under the bridge!) which I used whenever I had to do serious network-level work, or for that matter anything fun.

But I had never found a nice way to run Linux virtual machines under Linux. And by "nice" I mean something that works well and is reasonably easy to set up.

And this is where LXC comes in.

Linux container

Although LXC is not, strictly speaking, a "virtual machine" (it's more of a "virtual environment"), it still lets you run a Linux that doesn't mix with your own machine in terms of configuration, installed packages, or whatever you might break in the system.

What can it be used for? In my case I use it a lot at work, since my development machine runs Ubuntu Quantal, but the systems running on the servers are under Precise or Lucid (so I have a container for each one). I also plan to use them to test installations from scratch (for example, when building a .deb for the first time, to try installing it on a clean machine).

How do you create and use a container? After installing the necessary packages (sudo apt-get install lxc libvirt-bin), creating a container is preeeetty simple (from here on, replace "mi-lxc" everywhere with whatever name you want for your container):

    sudo lxc-create -t ubuntu -n mi-lxc -- -r precise -a i386 -b $USER

Let's break that down. The -t is the template to use, and the -n is the name we're going to give the container. Then we see a "--", which means that the rest are options for the template itself: in this case, use the Precise release, the i386 architecture, and my own user.

The wonderful thing about this is that the container, inside, has my user, because the home directory is shared! And with it all my bash, vim, ssh, gnupg, etc. configurations, so "doing things" inside the lxc is direct, there's nothing to set up (but, at the same time, we can "break" the home from inside the container, so watch out).

To start the container we can do

    sudo lxc-start -n mi-lxc

This leaves us at a prompt ready to log in, and here our own username and password are enough. Once inside, we can use the container as if it were a brand-new machine.

All very nice, but I still like to apply a few tweaks that make using it even more direct and simple. These tweaks, at the system level, are basically so that we can get into the container more easily, and use graphical applications from inside it.

To get in more easily, we need Avahi configured. Beyond installing it (sudo apt-get update; sudo apt-get install avahi-daemon), there is one detail to tweak: inside the lxc, open the file /etc/avahi/avahi-daemon.conf and raise rlimit-nproc considerably (for example, from 3 to 300).
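
A sketch of that one-line change, assuming the stock layout of avahi-daemon.conf (the default value may differ on your release):

    # /etc/avahi/avahi-daemon.conf (inside the container)
    [rlimits]
    rlimit-nproc=300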

With this we're ready to get into the container easily. We can try it from another terminal; just run:

    ssh mi-lxc.local

Nice, isn't it? But it's also good to be able to forward X events, so we can launch graphical applications. For that we have to touch the following on the host (that is, not in the container but on the "real" machine): edit /var/lib/lxc/mi-lxc/fstab and add the line:

    /tmp/.X11-unix tmp/.X11-unix none bind

In the container, we have to make sure that /tmp/.X11-unix exists, and restart the container after these changes.
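
A minimal sketch of those two steps, using the same container name as in the examples above:

    # inside the container: make sure the X socket mount point exists
    sudo mkdir -p /tmp/.X11-unix

    # on the host: restart the container so the new fstab entry takes effect
    sudo lxc-stop -n mi-lxc
    sudo lxc-start -n mi-lxc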

We also need to set DISPLAY. I mixed this into my .bashrc, together with something that changes the prompt colour when I come in over ssh (even giving different colours to different containers). What I'm using is:

    if [ `hostname` = "mi-lxc" ]; then
        export PS1='\[\e[1;34m\]\u@\h:\w${text}$\[\e[m\] ';
        export DISPLAY=:0
    fi

To wrap up, then, here are the three commands I use most day to day with containers, beyond the setup itself: start the container (note the -d, which starts it as a daemon, since we connect over ssh anyway); connect to it (note the -A, which forwards the authentication agent connection); and finally shut the container down:

    sudo lxc-start -n mi-lxc -d
    ssh -A mi-lxc.local
    sudo lxc-stop -n mi-lxc

Enjoy.

Read more
Prakash

With Windows 8 pushing a “touch-first” desktop interface—Microsoft’s words, not ours—and with Valve’s Steam on Linux beginning to bring much-needed games and popular attention to the oft-overlooked operating system, there’s never been a better time to take Linux out for a test drive.

Dipping your toes into the penguin-filled waters of the most popular open-source ecosystem is easy, and you don’t have to commit to switching outright to Linux. You can install it alongside your current Windows system, or even try it without installing anything at all.

Ubuntu is the most popular Linux distribution for desktop and laptop Linux users, so we’ll focus on Ubuntu throughout this guide. For the most part, Ubuntu just plain works. It sports a subtle interface that stays out of your way. It enjoys strong support from software developers (including Valve, since Steam on Linux only officially supports Ubuntu). And you can find tons of information online if you run into problems.

Read more.

Read more
brendandonegan

The inaugural online UDS (or vUDS as it’s becoming known) is underway. This brings with it a number of new challenges in terms of running a good session. Having sat in on a few sessions yesterday, and having been the session lead for several sessions at physical UDSes going back nearly two years, I thought I’d jot down a few tips on how to run a good session.

Prepare

Regardless of whether the session is physical or virtual, it’s always important to prepare. The purpose of a UDS session is to get feedback on some proposed plan of work (even if it is extremely nebulous at the time of the session). Past experience shows that sessions are more productive when most of the plan is already fleshed out beforehand, so that the session basically functions as a review/comments meeting. This depends on your particular case, since the thing you are planning may not be possible to flesh out in much detail without feedback, though I personally find that is rarely the case.

Be punctual

UDS runs on a tight schedule, at least in the physical version, and I don’t see any good reason why this should change for vUDS. Punctuality is therefore important, not just as good manners but from a practical point of view. You need time to compose yourself, find your notes and make sure everything is set up. For a physical UDS this meant checking that microphones were working and projectors were projecting. For a vUDS, in my brief experience, it means making sure everyone who needs to be is invited into the hangout, that the etherpad is up, and that the video feed is working on the session page.

Delegate

As the session lead it is your responsibility to run a good session, however it will be impossible for you to perform all the tasks required to achieve this on your own. Someone needs to be making notes on the Etherpad and someone needs to be monitoring IRC. You should also be looking out for questions yourself but since you may be concentrating on conveying information and answering other questions, you do need help with this.

Avoid going off track

Time is limited in a UDS session and you may have a lot of points to get through. Be wary of getting distracted from the point of the session and discussing things that may not be entirely relevant. Don’t be afraid to cut people short – if the question is important to them then you can follow up offline later.

Manage threads of communication

This one is quite vUDS-specific, but especially now that audiovisual participation is limited, it is important that all of the conversation take place in one spot, particularly for people who are catching up with the video streams later on. Don’t allow a parallel conversation to develop on IRC if possible. If someone asks a question in IRC, repeat it to the video audience and answer it in the hangout, not on IRC. If someone is talking a lot in IRC and even answering questions, do invite them into the hangout so that what they’re saying can be recorded. It may not be possible to avoid this entirely, but as session lead you need to do your best to mitigate it.

Follow up

Not so much a tip for running a good session, but for getting the best from a good session. Remember to read the notes from the session and rewatch the video so that you can use the feedback to adapt your plan and find places to follow up.

That’s all there is to say. I really hope this first virtual UDS goes very well and that sessions are productive for everyone involved.


Read more
Prakash

Hackable Lego Robot Runs Linux

The Lego Mindstorms EV3 is the first major revamp of the Lego Group’s programmable robot kit since 2006, and the first to run embedded Linux.

Unveiled at CES in Las Vegas yesterday, with the first public demos starting today at the Kids Play Summit at the Venetian Hotel, the $350 robot is built around an upgraded “Intelligent Brick” computer. Lego swapped out the previous microcontroller for a 300MHz ARM9 processor capable of running new Linux-based firmware. As a result, the kids-oriented Mindstorms EV3 offers far more programmability than the NXT series, which was last updated in 2009, says Lego.

Read More.

Read more
Prakash

The team behind the Samba file, print, and authentication server suite for Microsoft Windows clients announced the release of Samba version 4 yesterday. This version includes significant new capabilities that offer an open source replacement for many enterprise infrastructure roles currently delivered exclusively by Microsoft software, including acting as a domain controller, providing SMB2.1 protocol support, delivering clustering, and offering a virtual filesystem (VFS) interface. It comes with Coverity security certification and easy upgrade scripts. The release notes include details of all changes.

Notably, this includes the first open source implementation of Microsoft’s Active Directory protocols; Samba previously only offered Windows NT domain controller functions. According to the press release, “Samba 4.0 provides everything needed to serve as an Active Directory Compatible Domain Controller for all versions of Microsoft Windows clients currently supported by Microsoft, including the recently released Windows 8.”

Samba 4 can join existing Active Directory domains and provides all necessary function to host a domain that can be joined by Microsoft Active Directory servers. It provides all the services needed by Microsoft Exchange, as well as opening up the possibility of fully open source alternatives to Exchange such as the OpenChange project.
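
For a flavour of what hosting a domain now looks like, provisioning a new Active Directory domain with Samba 4 is a single samba-tool invocation. This is only a sketch; the realm, domain, and password values are placeholders, and your packages may lay things out differently:

    sudo samba-tool domain provision --realm=EXAMPLE.COM --domain=EXAMPLE \
        --server-role=dc --dns-backend=SAMBA_INTERNAL \
        --adminpass='Chang3MePlease'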

Read More.

Read more
Prakash

While ARM is gaining a lot of momentum, the challenge with ARM until now has been that systems from different vendors differ so much that each requires its own kernel and entire OS stack.

With Linux Kernel 3.7, this has changed for the better.

ARM’s problem was that, unlike the x86 architecture, where one Linux kernel could run on almost any PC or server, almost every ARM system required its own customized Linux kernel. Now with 3.7, ARM architectures can use one single vanilla Linux kernel while keeping their special device sauce in device trees.

The end result is that ARM developers will be able to boot and run Linux on their devices and then worry about getting all the extras to work. This will save them, and the Linux kernel developers, a great deal of time and trouble.
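
To give a sense of what keeping the "device sauce" in a device tree means, here is a minimal, hypothetical device tree source fragment. The board and device names are invented for illustration (though arm,pl011 is ARM's standard UART binding), and real trees are far larger:

    /dts-v1/;

    / {
        model = "Vendor Example Board";        /* illustrative names only */
        compatible = "vendor,example-board";

        /* hardware is described as data, not as compiled-in board code */
        serial@101f0000 {
            compatible = "arm,pl011";
            reg = <0x101f0000 0x1000>;
        };
    };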

Just as good for those ARM architects and programmers who are working on high-end systems: Linux now supports 64-bit ARM processors. 64-bit ARM CPUs won’t ship in commercial quantities until 2013, but when they arrive, programmers eager to try 64-bit ARM processors on servers will have Linux ready for them.

Read More.

Read more
Prakash

From PC World.

Ubuntu is a widely popular open-source Linux distribution with eight years of maturity under its belt, and more than 20 million users. Linux accounts for roughly 5 percent of desktop OSs, and at least one survey suggests that about half of those installations are Ubuntu. (Windows, meanwhile, accounts for about 84 percent.)

The timing of this latest Ubuntu release couldn’t be better for Windows users faced with the paradigm-busting Windows 8 and the big decision of whether to take the plunge.

Initial uptake of Windows 8 has been unenthusiastic, according to reports, and a full 80 percent of businesses will never adopt it, Gartner predicts. As a result, Microsoft’s big gamble may be desktop Linux’s big opportunity.

So, now that Canonical has thrown down the gauntlet, let’s take a closer look at Ubuntu 12.10 to see how it compares with Windows 8 from a business user’s perspective.

|  | Windows 8 Pro (x86) | Ubuntu 12.10 |
| --- | --- | --- |
| License fee | $39 to $69 upgrade | Free |
| CPU architectures supported | x86, x86-64 | x86, x86-64, ARM, PPC |
| Minimum RAM | 1GB (32-bit), 2GB (64-bit) | 512MB |
| Minimum hard-disk space | 20GB | 5GB |
| Concurrent multiuser support | No | Yes |
| Workspaces | One | Two or more |
| Virtualization | Hyper-V | KVM |
| License | Not applicable | GPL Open Source: Main; Non-GPL: Restricted |
| Productivity software included | None | LibreOffice |
| Graphics tools included | No | Yes |

Read More.

Read more
Prakash

Over €10 million (approximately £8 million or $12.8 million) has been saved by the city of Munich thanks to its development and use of the city’s own Linux platform. The calculation of savings follows a question from the city council’s independent Free Voters (Freie Wähler) group.

Read More.

Urge your city to save tax money; it’s your hard-earned money.

Read more
Prakash

After installing Ubuntu 12.10, the first thing I wanted to do was disable reverse scrolling – you scroll down and it scrolls up! Apple calls this “natural scrolling”. I don’t know what’s natural about it :) but maybe it feels natural to Apple users.

Open a terminal and edit the .Xmodmap file in your home directory with any editor, for example:

 gedit ~/.Xmodmap

Here you should see this:

pointer = 1 2 3 5 4 6 7 8 9 10 11 12

Note that in this sequence the numbers 5 and 4 are swapped. Change them back to the normal order:

pointer = 1 2 3 4 5 6 7 8 9 10 11 12

Now you are done; logging out and back in should do the job.
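
If you’d rather not log out, in my experience the same mapping can be applied immediately from a terminal:

 xmodmap ~/.Xmodmap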

If you have Ubuntu Tweak installed, just go to Tweaks -> Miscellaneous and you will see an option to toggle Natural Scrolling on or off.

Read more
brendandonegan

I find that sometimes the Network Manager applet in Ubuntu can be a little temperamental (apologies to the maintainer, cyphermox, if he’s reading this – but such is the nature of software). Sometimes it won’t show available routers and if that’s the case then I’ve established a little workaround that I’m telling you about mainly because it involves a script I wrote that lives in a somewhat obscure place in Ubuntu.

Step one in the workaround is needed if you don’t know which networks are available in advance. If you’re sitting in your home then you’ll probably not need this step since most people know their router SSID. If you don’t then you can scan using:

nmcli dev wifi list

This is really reliable and always works if your WiFi hardware is working.

The second step is to use the SSID to create the connection using the script I wrote:

sudo /usr/share/checkbox/scripts/create_connection $SSID --security=wpa --key=$WPA_KEY

If the router doesn’t use any security (which nmcli dev wifi list will tell you) then you don’t need --security or --key. If the router doesn’t use WPA2 (maybe it uses WEP), then you’re out of luck – and deservedly so. Change the router’s security settings if you can!
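
The two steps glue together nicely into a script. Here’s a hypothetical wrapper, assuming the checkbox script path above and a WPA2-protected network whenever a key is given:

    #!/bin/sh
    # hypothetical helper: ./wifi-connect.sh SSID [WPA_KEY]
    SSID="$1"
    KEY="$2"

    # scan first, so you can sanity-check that the SSID is actually visible
    nmcli dev wifi list

    if [ -n "$KEY" ]; then
        # WPA2-protected network
        sudo /usr/share/checkbox/scripts/create_connection "$SSID" \
            --security=wpa --key="$KEY"
    else
        # open network: no security options needed
        sudo /usr/share/checkbox/scripts/create_connection "$SSID"
    fi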


Read more

The Ubuntu Developer Summit was held in Copenhagen last week, to discuss plans for the next six-month cycle of Ubuntu. This was the most productive UDS that I've been to — maybe it was the shorter four-day schedule, or the overlap with Linaro Connect, but it sure felt like a whirlwind of activity.

I thought I'd share some details about some of the sessions that cover areas I'm working on at the moment. In no particular order:

Improving cross-compilation

Blueprint: foundations-r-improve-cross-compilation

This plan is part of a multi-cycle effort to improve cross-compilation support in Ubuntu. Progress is generally going well — the consensus from the session was that the components are fairly close to complete, but we still need some work to pull those parts together into something usable.

So, this cycle we'll be working on getting that done. While we have a few bugfixes and infrastructure updates to do, one significant part of this cycle’s work will be to document the “best-practices” for cross builds in Ubuntu, on wiki.ubuntu.com. This process will be heavily based on existing pages on the Linaro wiki. Because most of the support for cross-building is already done, the actual process for cross-building should be fairly straightforward, but needs to be defined somewhere.
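
As a taste of the pieces that are already in place, here’s roughly what a simple cross build looks like today. This assumes the gcc-arm-linux-gnueabihf cross-compiler package from the Ubuntu archive, and is a sketch rather than the documented best practice:

    # install an ARM hard-float cross toolchain from the archive
    sudo apt-get install gcc-arm-linux-gnueabihf

    # cross-compile a trivial program and inspect the result
    echo 'int main(void) { return 0; }' > hello.c
    arm-linux-gnueabihf-gcc -o hello hello.c
    file hello   # should report an ARM ELF executable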

I'll post an update when we have a working draft on the Ubuntu wiki, stay tuned for details.

Rapid archive bringup for new hardware

Blueprint: foundations-r-rapid-archive-bringup

I'd really like for there to be a way to get an Ubuntu archive built “from scratch”, to enable custom toolchain/libc/other system components to be built and tested. This is typically useful when bringing up new hardware, or testing rebuilds with new compiler settings. Because we may be dealing with new hardware, doing this bootstrap through cross-compilation is something we'd like too.

Eventually, it would be great to have something as straightforward as the OpenEmbedded or OpenWRT build process to construct a repository with a core set of Ubuntu packages (say, minbase), for previously-unsupported hardware.

The archive bootstrap process isn't done often, and can require a large amount of manual intervention. At present, there's only a couple of folks who know how to get it working. The plan here is to document the bootstrap process in this cycle, so that others can replicate the process, and possibly improve the bits that are a little too janky for general consumption.

ARM64 / ARMv8 / aarch64 support

Blueprint: foundations-r-aarch64

This session is an update for progress on the support for ARMv8 processors in Ubuntu. While no general-purpose hardware exists at the moment, we want to have all the pieces ready for when we start seeing initial implementations. Because we don't have hardware yet, this work has to be done in a cross-build environment; another reason to keep on with the foundations-r-improve-cross-compilation plan!

So far, toolchain progress is going well, with initial cross toolchains available for quantal.

Although kernel support isn’t urgent at the moment, we’ll be building an initial kernel-headers package for aarch64. There's also a plan to get a page listing the aarch64-cross build status of core packages, so we'll know what is blocked for 64-bit ARM enablement.

We’ve also got a bunch of workitems for volunteers to fix cross-build issues as they arise. If you're interested, add a workitem in the blueprint, and keep an eye on it for updates.

Secure boot support in Ubuntu

Blueprint: foundations-r-secure-boot

This session covered progress of secure boot support as at the 12.10 Quantal Quetzal release, items that are planned for 13.04, and backports for 12.04.2.

As for 12.10, we’ve got the significant components of secure boot support into the release — the signed boot chain. The one part that hasn't hit 12.10 yet is the certificate management & update infrastructure, but that is planned to reach 12.10 by way of a not-too-distant-future update.

The foundations team also mentioned that they were starting the 12.04.2 backport right after UDS, which will bring secure boot support to our current “Long Term Support” (LTS) release. Since the LTS release is often preferred in Ubuntu preinstall situations, this may be used as a base for hardware enablement on secure boot machines. Combined with the certificate management tools (described at sbkeysync & maintaining uefi key databases), and the requirement for “custom mode” in general-purpose hardware, this will allow for user-defined trust configuration in an LTS release.

As for 13.04, we're planning to update the shim package to a more recent version, which will have Matthew Garrett's work on the Machine Owner Key plan from SuSE.

We're also planning to figure out support for signed kernel modules, for users who wish to verify all kernel-level code. Of course, this will mean some changes to things like DKMS, which run custom module builds outside of the normal Ubuntu packages.

Netboot with secure boot is still in progress, and will require some fixes to GRUB2.

And finally, the sbsigntools codebase could do with some new testcases, particularly for the PE/COFF parsing code. If you're interested in contributing, please contact me at jeremy.kerr@canonical.com.

Read more
Prakash

Ubuntu 12.10 is here. With this release there is no CD image, only a DVD image, which is 800 MB in size. A torrent is my preferred download method.

| Ubuntu 12.10 | Torrent links | Direct downloads |
| --- | --- | --- |
| Ubuntu Desktop 64-Bit Edition | Torrent | Main Server |
| Ubuntu Desktop 32-Bit Edition | Torrent | Main Server |
| Ubuntu Server Edition 64-Bit | Torrent | Main Server |
| Ubuntu Server Edition 32-Bit | Torrent | Main Server |

Have fun :)

Ubuntu Unleashed 2012 Edition: Covering 11.10 and 12.04 (7th Edition)

Read more
sfmadmax

So I use Xchat daily and connect to a private IRC server to talk with my colleagues. I also have a BIP server in the office to record all of the IRC transcripts; this way I never miss any conversations, regardless of the time of day. Because the BIP server is behind a firewall on the company’s network, I can’t access it from the outside. For the past year I’ve been working around this by connecting to my company’s firewall via ssh, creating a SOCKS tunnel, and simply directing xchat to talk through my local SOCKS proxy.

To do this, open a terminal and issue:

ssh -CND <LOCAL_IP_ADDRESS>:<PORT> <USER>@<SSH HOST>

Ex: ssh -CND 192.168.1.44:9999 sfeole@companyfirewall.com

Starting ssh with -CND:

‘D’ specifies local “dynamic” application-level port forwarding: it works by allocating a socket to listen on a port on the local side, optionally bound to the specified bind_address. The ‘C’ adds compression to the data stream, and the ‘N’ is a safeguard that protects the user from executing remote commands.

192.168.1.44 is my IPv4 address

9999 is the local port I’m going to open and direct traffic through

After the SSH tunnel is open I now need to launch xchat, navigate to Settings -> Preferences -> Network Setup, configure xchat to use my local IP (192.168.1.44) and local port (9999), then press OK and Reconnect.

I should now be able to connect to the IRC server behind the firewall. Usually I run through this process a few times a day, so it becomes somewhat of a tedious annoyance after a while.

Recently I finished a cool python3 script that does all of this in one quick command.

The script does the following:

1.) identify the ipv4 address of the interface device you specify

2.) configure xchat.conf to use the new ipv4 address and port specified by the user

3.) open the ssh tunnel using the SSH -CND command from above

4.) launch xchat and connect to your server (assuming you have it set to auto connect)

To use it simply run

$./xchat.py -i <interface> -p <port>

ex: $./xchat.py -i wlan0 -p 9999

You can select wlan0 or eth0 and, of course, your desired port. When you’re done with the tunnel, simply issue <Ctrl-C> to kill it, and voilà!

https://code.launchpad.net/~sfeole/+junk/xchat

#!/usr/bin/env python3
#Sean Feole 2012,
#
#xchat proxy wrapper, for those of you that are constantly on the go:
#   --------------  What does it do? ------------------
# Creates a SSH Tunnel to Proxy through and updates your xchat config
# so that the user does not need to muddle with program settings

import signal
import shutil
import sys
import subprocess
import argparse
import re
import time

proxyhost = "myhost.company.com"
proxyuser = "sfeole"
localusername = "sfeole"

def get_net_info(interface):
    """
    Obtains your IPv4 address
    """

    # parse the IPv4 address from the "inet addr:x.x.x.x" field on the
    # second line of ifconfig output (net-tools format of the era)
    myaddress = subprocess.getoutput("/sbin/ifconfig %s" % interface)\
                .split("\n")[1].split()[1][5:]
    if myaddress == "CAST":
        print ("Please Confirm that your Network Device is Configured")
        sys.exit()
    else:
        return (myaddress)

def configure_xchat_config(Proxy_ipaddress, Proxy_port):
    """
    Reads your current xchat.conf and creates a new one in /tmp
    """

    in_file = open("/home/%s/.xchat2/xchat.conf" % localusername, "r")
    output_file = open("/tmp/xchat.conf", "w")
    for line in in_file.readlines():
        line = re.sub(r'net_proxy_host.+', 'net_proxy_host = %s'
                 % Proxy_ipaddress, line)
        line = re.sub(r'net_proxy_port.+', 'net_proxy_port = %s'
                 % Proxy_port, line)
        output_file.write(line)
    output_file.close()
    in_file.close()
    shutil.copy("/tmp/xchat.conf", "/home/%s/.xchat2/xchat.conf"
                 % localusername)

def ssh_proxy(ProxyAddress, ProxyPort, ProxyUser, ProxyHost):
    """
    Create SSH Tunnel and Launch Xchat
    """

    ssh_address = "%s:%i" % (ProxyAddress, ProxyPort)
    user_string = "%s@%s" % (ProxyUser, ProxyHost)
    ssh_open = subprocess.Popen(["/usr/bin/ssh", "-CND", ssh_address,
                 user_string], stdout=subprocess.PIPE, stdin=subprocess.PIPE)

    time.sleep(1)
    print ("")
    print ("Kill this tunnel with Ctrl-C")
    time.sleep(2)
    subprocess.call("xchat")
    # block until the ssh tunnel exits (Ctrl-C tears it down)
    ssh_open.wait()

def main():
    """
    Core Code
    """

    parser = argparse.ArgumentParser()
    parser.add_argument('-i', '--interface',
                        help="Select the interface you wish to use",
                        choices=['eth0', 'wlan0'],
                        required=True)
    parser.add_argument('-p', '--port',
                        help="Select the internal port you wish to bind to",
                        required=True, type=int)
    args = parser.parse_args()

    proxyip = (get_net_info("%s" % args.interface))
    configure_xchat_config(proxyip, args.port)
    print (proxyip, args.port, proxyuser, proxyhost)

    ssh_proxy(proxyip, args.port, proxyuser, proxyhost)

if __name__ == "__main__":
    sys.exit(main())

Refer to the launchpad address above for more info.


Read more
Prakash

From the article:

“You’d be a fool to use anything but Linux.” :)

Most Linux people know that Google uses Linux on its desktops as well as its servers. Some know that Ubuntu Linux is Google’s desktop of choice and that it’s called Goobuntu. But almost no one outside of Google knew exactly what was in it or what roles Ubuntu Linux plays on Google’s campus, until now.

Read More.


Read more
Prakash

Apple — one of the most closed companies in the world — is actually using a lot of open source software. Licensing information in the Apple iPhone proves this. Go to the legal section on the iPhone and it cites Linux kernel developer Ted Ts’o for his code. SuSE Linux is there, too.

Zemlin made the point that Apple has hundreds of billions of dollars in cash, which is enough to buy HP, Intel and Dell combined. Instead, Apple purchased the copyright to the Common Unix Printing System (CUPS), which now is on every Linux and Apple system.

The list of companies using Linux does not stop at Apple. Microsoft, which once equated open source with communism, is now a top contributor to the Linux Kernel project. And VMware is getting on the bandwagon.

Read More.


Read more

Most of the components of the 64-bit ARM toolchain have been released, so I've put together some details on building a cross compiler for aarch64. At present, this is only binutils & compiler (ie, no libc), so is probably not useful for applications. However, I have a 64-bit ARM kernel building without any trouble.

Update: looking for an easy way to install a cross-compiler on Ubuntu or debian? Check out aarch64 cross compiler packages for Ubuntu & Debian.

pre-built toolchain

If you're simply looking to download a cross compiler, here's one I've built earlier: aarch64-cross.tar.gz (.tar.gz, 85MB). It's built for an x86_64 build machine, using Ubuntu 12.04 LTS, but should work with other distributions too.

The toolchain is configured for paths in /opt/cross/. To install it, do a:

[jk@pecola ~]$ sudo mkdir /opt/cross
[jk@pecola ~]$ sudo chown $USER /opt/cross
[jk@pecola ~]$ tar -C /opt/cross/ -xf aarch64-x86_64-cross.tar.gz

If you'd like to build your own, here's how:

initial setup

We're going to be building in ~/build/arm64-toolchain/, and installing into /opt/cross/aarch64/. If you'd prefer to use other paths, simply change these in the commands below.

[jk@pecola ~]$ mkdir -p ~/build/arm64-toolchain/
[jk@pecola ~]$ cd ~/build/arm64-toolchain/
[jk@pecola ~]$ prefix=/opt/cross/aarch64/

We'll also need a few packages for the build:

[jk@pecola ~]$ sudo apt-get install bison flex libmpfr-dev libmpc-dev texinfo

binutils

I have a git repository with a recent version of ARM's aarch64 support, plus a few minor updates at git://kernel.ubuntu.com/jk/arm64/binutils.git (or browse the gitweb view). To build:

Update: arm64 support has been merged into upstream binutils, so you can now use the official git repository. The commit 02b16151 builds successfully for me.

[jk@pecola arm64-toolchain]$ git clone git://gcc.gnu.org/git/binutils.git
[jk@pecola arm64-toolchain]$ cd binutils
[jk@pecola binutils]$ ./configure --prefix=$prefix --target=aarch64-none-linux
[jk@pecola binutils]$ make
[jk@pecola binutils]$ make install
[jk@pecola binutils]$ cd ..

kernel headers

Next up, the kernel headers. I'm using Catalin Marinas' kernel tree on kernel.org here. We don't need to do a full build (we don't have a compiler yet..), just the headers_install target.

[jk@pecola arm64-toolchain]$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64.git
[jk@pecola arm64-toolchain]$ cd linux-aarch64
[jk@pecola linux-aarch64]$ git reset --hard b6fe1645
[jk@pecola linux-aarch64]$ make ARCH=arm64 INSTALL_HDR_PATH=$prefix headers_install
[jk@pecola linux-aarch64]$ cd ..

gcc

And now we should have things ready for the compiler build. I have a git tree up at git://kernel.ubuntu.com/jk/arm64/gcc.git (gitweb), but this is just the aarch64 branch of upstream gcc.

[jk@pecola arm64-toolchain]$ git clone git://kernel.ubuntu.com/jk/arm64/gcc.git
[jk@pecola arm64-toolchain]$ cd gcc/aarch64-branch/
[jk@pecola aarch64-branch]$ git reset --hard d6a1e14b
[jk@pecola aarch64-branch]$ ./configure --prefix=$prefix \
    --target=aarch64-none-linux --enable-languages=c \
    --disable-threads --disable-shared --disable-libmudflap \
    --disable-libssp --disable-libgomp --disable-libquadmath
[jk@pecola aarch64-branch]$ make
[jk@pecola aarch64-branch]$ make install
[jk@pecola aarch64-branch]$ cd ../..

That's it! You should have a working compiler for arm64 kernels in /opt/cross/aarch64.
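
As a quick smoke test of the new toolchain, this is how I'd cross-build the kernel tree we checked out earlier; the paths assume the layout used above:

[jk@pecola arm64-toolchain]$ export PATH=/opt/cross/aarch64/bin:$PATH
[jk@pecola arm64-toolchain]$ cd linux-aarch64
[jk@pecola linux-aarch64]$ make ARCH=arm64 CROSS_COMPILE=aarch64-none-linux- defconfig
[jk@pecola linux-aarch64]$ make ARCH=arm64 CROSS_COMPILE=aarch64-none-linux- -j4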

Read more
Steve George

Someone recently remarked to me that you can think of hardware as software that’s developed really slowly. While the software space has been going wild over cloud computing it’s been pretty quiet on the hardware side of the equation. But, that’s going to change as we see a new class of server hardware that helps businesses take advantage of the power and density savings possible through new CPU architectures and software stacks.

As an illustration, IDC reported on the server market recently, and the numbers show the start of the next wave of change. As you’d expect, the general server market is pretty poor, growing at just 2.7%. But blade servers, which are commonly used for Web workloads, are growing at 7%. Finally, the hyperdense form-factor is growing at 29% – an astounding rate.

In some ways the drivers for this change are just the continuation of a long-running story where everything is (has?) moved into a Web infrastructure set-up which enables the horizontal scaling of services. Implicitly this favours buying a lot of cheaper systems and building in redundancy at the software level. But the Cloud accelerates this trend further since it’s stateless and you no longer care about the specifics of the hardware layer in the same way.

The challenge for infrastructure managers is that continually adding more servers means you’re incurring ongoing costs for electricity, space and management. So anything that can drive better performance per watt in a denser arrangement is interesting. As you can see from the diagram below the expected growth in this space is really significant.

At a CPU architecture level ARM chips have been getting more powerful and this year they’re going to enter into the mix for servers. The first reason for this is that they’re relatively low-power which means lower running costs. Since they’re low power they also give off less heat so another advantage is they can be put into a ‘hyperdense’ arrangement that also saves money in terms of space. You’ll see systems this year from both Dell and HP (see Moonshot). It’s pretty astounding to think that the same chip that’s powering your phone could be powering Facebook!

If we’re truly going to get the benefit from the new hyperdense form-factor then the software layer will also need to reflect the capabilities of these systems. So for Ubuntu we’re continuing our work on ARM, and recently announced the availability of 12.04 LTS as an ARM server – the first commercial Linux to come to the platform. We’re also exploring how these hardware systems’ unique strengths are expressed and how this impacts the software stack. For example, if you’ve got a few hundred systems in a half-rack then the problem of managing those systems is far more significant – so service orchestration (such as Juju) is really critical, as sketched below. It’s exciting times in this space and a really interesting project.
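
To make the service orchestration point concrete, here’s a sketch of the Juju workflow, assuming a bootstrapped environment; the charms are the stock demo examples rather than anything ARM-specific:

    juju bootstrap                      # stand up the environment
    juju deploy mysql                   # deploy one service per command
    juju deploy wordpress
    juju add-relation wordpress mysql   # wire the services together
    juju expose wordpress               # make the frontend reachable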

If you’re interested in a quick summary of ARM server check out this Prezi by Victor Palau.


Tagged: arm chips, enterprise-it

Read more
Prakash

OpenStack has the potential to become as widely used in cloud computing as Linux in servers, according to Rackspace’s chief executive Lanham Napier.

Napier noted that OpenStack has more code contributors than Linux did when it started: it had 206 code contributors by its 84th week, whereas Linux took 615 weeks to get to that level. Similarly, OpenStack had 166 companies adding to it by its 84th week, whereas Linux reached 180 companies by its 828th week.

OpenStack is already well on the way to building that community, given the broad adoption the technology has seen since its launch two years ago. At the moment, more than 100 companies have put OpenStack into production, including AT&T, Korea Telecom, the San Diego Supercomputer Centre, HP and the US Department of Energy’s Argonne National Laboratory.

Read More.


Read more