Canonical Voices

Posts tagged with 'proxy'

sfmadmax

So I use Xchat daily and connect to a private IRC server to talk with my colleagues. I also have a BIP server in the office to record all of the IRC transcripts; this way I never miss any conversations regardless of the time of day. Because the BIP server is behind a firewall on the company's network, I can't access it from the outside. For the past year I've been working around this by connecting to my company's firewall via ssh, creating a SOCKS tunnel, and then simply directing xchat to talk through my local SOCKS proxy.

To do this, open a terminal and issue:

ssh -CND <LOCAL_IP_ADDRESS>:<PORT> <USER>@<SSH HOST>

Ex: ssh -CND 192.168.1.44:9999 sfeole@companyfirewall.com

Starting ssh with -CND:

‘D’ specifies local “dynamic” application-level port forwarding: a socket is allocated to listen on a port on the local side, optionally bound to the specified bind_address. ‘C’ adds compression to the data stream, and ‘N’ is a safeguard that protects the user from executing remote commands.

192.168.1.44 is my IPv4 address

9999 is the local port I'm going to open and direct traffic through
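Before pointing xchat at the tunnel, it can be handy to confirm that the SOCKS port is actually accepting connections. A minimal check in Python (a hypothetical helper, not part of the original workflow):

```python
import socket

def port_listening(host, port, timeout=2.0):
    """Return True if something is accepting TCP connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Once the tunnel is up, port_listening('192.168.1.44', 9999) should return True.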

After the SSH tunnel is open, I launch xchat, navigate to Settings -> Preferences -> Network Setup, configure xchat to use my local IP (192.168.1.44) and local port (9999), then press OK and Reconnect.

I should now be able to connect to the IRC server behind the firewall. Usually I run through this process a few times a day, so it becomes somewhat of a tedious annoyance after a while.

Recently I finished a cool python3 script that does all of this in one quick command.

The script does the following:

1.) identify the ipv4 address of the interface device you specify

2.) configure xchat.conf to use the new ipv4 address and port specified by the user

3.) open the ssh tunnel using the SSH -CND command from above

4.) launch xchat and connect to your server (assuming you have it set to auto connect)

To use it simply run

$./xchat.py -i <interface> -p <port>

ex: $./xchat.py -i wlan0 -p 9999

The user can select wlan0 or eth0 and, of course, the desired port. When you're done with the tunnel, simply issue <Ctrl-C> to kill it and voilà!

https://code.launchpad.net/~sfeole/+junk/xchat

#!/usr/bin/env python3
#Sean Feole 2012,
#
#xchat proxy wrapper, for those of you that are constantly on the go:
#   --------------  What does it do? ------------------
# Creates a SSH Tunnel to Proxy through and updates your xchat config
# so that the user does not need to muddle with program settings

import signal
import shutil
import sys
import subprocess
import argparse
import re
import time

proxyhost = "myhost.company.com"
proxyuser = "sfeole"
localusername = "sfeole"

def get_net_info(interface):
    """
    Obtains your IPv4 address
    """

    # Parse the second line of `/sbin/ifconfig <interface>` output, which on
    # old-style ifconfig looks like "inet addr:192.168.1.44  Bcast:...";
    # [5:] strips the leading "addr:".
    myaddress = subprocess.getoutput("/sbin/ifconfig %s" % interface)\
                .split("\n")[1].split()[1][5:]
    # With no address assigned, the token parsed above is "BROADCAST",
    # which [5:] reduces to "CAST".
    if myaddress == "CAST":
        print ("Please Confirm that your Network Device is Configured")
        sys.exit()
    else:
        return (myaddress)

def configure_xchat_config(Proxy_ipaddress, Proxy_port):
    """
    Reads your current xchat.conf and creates a new one in /tmp
    """

    in_file = open("/home/%s/.xchat2/xchat.conf" % localusername, "r")
    output_file = open("/tmp/xchat.conf", "w")
    for line in in_file.readlines():
        line = re.sub(r'net_proxy_host.+', 'net_proxy_host = %s'
                 % Proxy_ipaddress, line)
        line = re.sub(r'net_proxy_port.+', 'net_proxy_port = %s'
                 % Proxy_port, line)
        output_file.write(line)
    output_file.close()
    in_file.close()
    shutil.copy("/tmp/xchat.conf", "/home/%s/.xchat2/xchat.conf"
                 % localusername)

def ssh_proxy(ProxyAddress, ProxyPort, ProxyUser, ProxyHost):
    """
    Create SSH Tunnel and Launch Xchat
    """

    ssh_address = "%s:%i" % (ProxyAddress, ProxyPort)
    user_string = "%s@%s" % (ProxyUser, ProxyHost)
    ssh_open = subprocess.Popen(["/usr/bin/ssh", "-CND", ssh_address,
                 user_string], stdout=subprocess.PIPE, stdin=subprocess.PIPE)

    time.sleep(1)
    print ("")
    print ("Kill this tunnel with Ctrl-C")
    time.sleep(2)
    subprocess.call("xchat")
    # Block until the ssh tunnel exits instead of busy-polling it.
    ssh_open.wait()

def main():
    """
    Core Code
    """

    parser = argparse.ArgumentParser()
    parser.add_argument('-i', '--interface',
                        help="Select the interface you wish to use",
                        choices=['eth0', 'wlan0'],
                        required=True)
    parser.add_argument('-p', '--port',
                        help="Select the internal port you wish to bind to",
                        required=True, type=int)
    args = parser.parse_args()

    proxyip = (get_net_info("%s" % args.interface))
    configure_xchat_config(proxyip, args.port)
    print (proxyip, args.port, proxyuser, proxyhost)

    ssh_proxy(proxyip, args.port, proxyuser, proxyhost)

if __name__ == "__main__":
    sys.exit(main())

Refer to the launchpad address above for more info.
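As a side note, get_net_info scrapes /sbin/ifconfig, which ties the script to the old ifconfig output format. On Linux the same information can be read straight from the kernel with an ioctl; a sketch of that alternative (SIOCGIFADDR, Linux-only, not part of the original script):

```python
import fcntl
import socket
import struct

SIOCGIFADDR = 0x8915  # Linux ioctl that returns an interface's IPv4 address

def get_ipv4(interface):
    """Return the IPv4 address bound to `interface` (Linux only)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # Pack the interface name into a struct ifreq and ask the kernel.
        packed = fcntl.ioctl(sock.fileno(), SIOCGIFADDR,
                             struct.pack('256s', interface[:15].encode('utf-8')))
    finally:
        sock.close()
    # The IPv4 address sits at bytes 20-24 of the returned ifreq buffer.
    return socket.inet_ntoa(packed[20:24])
```

For example, get_ipv4('lo') returns '127.0.0.1'.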


Read more
mandel

With GObject introspection it is very simple to change the settings of your system through Python. First, let's use the command line to find out our current settings:

gsettings list-recursively org.gnome.system.proxy

The following script allows you to retrieve the http proxy settings that you are currently using:

from gi.repository import Gio
 
def get_settings():
    """Get proxy settings."""
    http_settings = Gio.Settings.new('org.gnome.system.proxy.http')
    host = http_settings.get_string('host')
    port = http_settings.get_int('port')
    if http_settings.get_boolean('use-authentication'):
        username = http_settings.get_string('authentication-user')
        password = http_settings.get_string('authentication-password')
    else:
        username = password = None
    return host, port, username, password

Setting them is as easy as getting them:

from gi.repository import Gio
 
def set_settings(host, port, username=None, password=None):
    """Set proxy settings."""
    http_settings = Gio.Settings.new('org.gnome.system.proxy.http')
    http_settings.set_string('host', host)
    http_settings.set_int('port', port)
    if username is not None:
        http_settings.set_boolean('use-authentication', True)
        http_settings.set_string('authentication-user', username)
        http_settings.set_string('authentication-password', password)

This is not utterly complicated, but I noticed that there are not many examples out there, so there you go. None of this code can be considered hard, but I'd like to point out that if you use the get_value method of the Settings object, you will have to call the appropriate get_* method on the returned GVariant, that is:

host = http_settings.get_string('host')

is equal to the following:

host = http_settings.get_value('host').get_string()

Read more
mandel

Over the past few days I have been working on implementing a Python TestCase that can be used to run integration tests against the future implementation of proxy support that will be landing in Ubuntu One. The idea of the TestCase is the following:

  • Start a proxy so that connections go through it. The proxy has to listen on two different ports: one on which auth is not required, and a second on which auth is required. At the moment the only supported proxy is Squid using basic auth.
  • The test case should provide a way to access the proxy details for subclasses to use.
  • The test case should integrate with the ubuntuone-dev-tools.

Initially, one of the major problems I had was to start squid on two different ports so that:

  • Port A accepts non-auth requests.
  • Port A rejects auth requests.
  • Port B accepts auth requests.

The idea is simple: if you use port A you should never auth, while on port B you must. An example configuration of the ACLs and ports is the following:

auth_param basic casesensitive on
# Use a default auth using ncsa and the passed generated file.
auth_param basic program ${auth_process} ${auth_file}
#Recommended minimum configuration:
acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
#
# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8	# RFC1918 possible internal network
acl localnet src 172.16.0.0/12	# RFC1918 possible internal network
acl localnet src 192.168.0.0/16	# RFC1918 possible internal network
#
acl SSL_ports port 443		# https
acl SSL_ports port 563		# snews
acl SSL_ports port 873		# rsync
acl Safe_ports port 80		# http
acl Safe_ports port 21		# ftp
acl Safe_ports port 443		# https
acl Safe_ports port 70		# gopher
acl Safe_ports port 210		# wais
acl Safe_ports port 1025-65535	# unregistered ports
acl Safe_ports port 280		# http-mgmt
acl Safe_ports port 488		# gss-http
acl Safe_ports port 591		# filemaker
acl Safe_ports port 777		# multiling http
acl Safe_ports port 631		# cups
acl Safe_ports port 873		# rsync
acl Safe_ports port 901		# SWAT
acl purge method PURGE
acl CONNECT method CONNECT

# make an acl for users that have auth
acl password proxy_auth REQUIRED myportname ${auth_port_number}
acl auth_port_connected myportname ${auth_port_number}
acl nonauth_port_connected myportname ${noauth_port_number}

# Settings used for the tests:
# Allow users connected to the nonauth port
# Allow users authenticated AND connected to the auth port
http_access allow nonauth_port_connected
http_access allow password

#Recommended minimum configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
# Only allow purge requests from localhost
http_access allow purge localhost
http_access deny purge
# Deny requests to unknown ports
http_access deny !Safe_ports
# Deny CONNECT to other than SSL ports
http_access deny CONNECT !SSL_ports
# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
#http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

#Allow ICP queries from local networks only
icp_access allow localnet
icp_access deny all

# Squid normally listens to port 3128 but we are going to listen to two
# different ports, one for auth, one for nonauth.
http_port ${noauth_port_number}
http_port ${auth_port_number}

#We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?

# Default cache settings.
cache_dir ufs ${spool_temp} 100 16 256

# access log settings
access_log ${squid_temp}/access.log squid

#Default cache store log
cache_store_log ${squid_temp}/store.log

#Default pid file name
pid_filename ${squid_temp}/squid.pid

#Default netdb file name:
netdb_filename ${spool_temp}/logs/netdb.state

#Suggested default:
refresh_pattern ^ftp:		1440	20%	10080
refresh_pattern ^gopher:	1440	0%	1440
refresh_pattern -i (/cgi-bin/|\?) 0	0%	0
refresh_pattern (Release|Packages(.gz)*)$	0	20%	2880
# example line deb packages
#refresh_pattern (\.deb|\.udeb)$   129600 100% 129600
refresh_pattern .		0	20%	4320

# Don't upgrade ShoutCast responses to HTTP
acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9]
upgrade_http0.9 deny shoutcast

# Apache mod_gzip and mod_deflate known to be broken so don't trust
# Apache to signal ETag correctly on such responses
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache

extension_methods REPORT MERGE MKACTIVITY CHECKOUT

hosts_file /etc/hosts

# Leave coredumps in the first cache dir
coredump_dir ${spool_temp}
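The ${...} markers in the configuration above are template placeholders filled in per test run. Assuming plain string substitution, Python's string.Template is enough to render such a file (the ports and paths below are made-up example values):

```python
from string import Template

# A small fragment of the squid config template shown above.
CONF_TEMPLATE = """\
http_port ${noauth_port_number}
http_port ${auth_port_number}
cache_dir ufs ${spool_temp} 100 16 256
"""

# Example values; the real test case would generate these at runtime.
conf = Template(CONF_TEMPLATE).substitute(
    noauth_port_number=3128,
    auth_port_number=3129,
    spool_temp='/tmp/squid-spool',
)
print(conf)
```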

Once the above was achieved, the code of the test case was quite simple on Ubuntu O. Unfortunately, it was not that easy on Ubuntu P, because there we have squid3, which supports HTTP 1.1 and keeps the connection alive. The fact that the connection is kept alive means that the reactor is left with a running selectable, because the proxy holds the connection open. To solve the issue I wrote the code so that the server could declare that the connection timed out. Here is the code that does it:

# -*- coding: utf-8 -*-
#
# Copyright 2011 Canonical Ltd.
#
# This program is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License version 3, as published
# by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranties of
# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR
# PURPOSE.  See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program.  If not, see <http://www.gnu.org/licenses/>.
"""Test the squid test case."""
import base64
 
from twisted.application import internet, service
from twisted.internet import defer, reactor
from twisted.web import client, error, http, resource, server
 
from ubuntuone.devtools.testcases.squid import SquidTestCase
 
 
SAMPLE_RESOURCE = "<p>Hello World!</p>"
SIMPLERESOURCE = "simpleresource"
THROWERROR = "throwerror"
UNAUTHORIZED = "unauthorized"
 
# ignore common twisted lint errors
# pylint: disable=C0103, W0212
 
 
class ProxyClientFactory(client.HTTPClientFactory):
    """Factory that supports proxy."""
 
    def __init__(self, proxy_url, proxy_port, url, headers=None):
        # we set the proxy details before the init because the parent __init__
        # calls setURL
        self.proxy_url = proxy_url
        self.proxy_port = proxy_port
        self.disconnected_d = defer.Deferred()
        client.HTTPClientFactory.__init__(self, url, headers=headers)
 
    def setURL(self, url):
        self.host = self.proxy_url
        self.port = self.proxy_port
        self.url = url
        self.path = url
 
    def clientConnectionLost(self, connector, reason, reconnecting=0):
        """Connection lost."""
        self.disconnected_d.callback(self)
 
 
class ProxyWebClient(object):
    """Provide useful web methods with proxy."""
 
    def __init__(self, proxy_url=None, proxy_port=None, username=None,
            password=None):
        """Create a new instance with the proxy settings."""
        self.proxy_url = proxy_url
        self.proxy_port = proxy_port
        self.username = username
        self.password = password
        self.factory = None
        self.connectors = []
 
    def _connect(self, url, contextFactory):
        """Perform the connection."""
        scheme, _, _, _ = client._parse(url)
        # pylint: disable=E1101
        if scheme == 'https':
            from twisted.internet import ssl
            if contextFactory is None:
                contextFactory = ssl.ClientContextFactory()
            self.connectors.append(reactor.connectSSL(self.proxy_url,
                                                      self.proxy_port,
                                                      self.factory,
                                                      contextFactory))
        else:
            self.connectors.append(reactor.connectTCP(self.proxy_url,
                                                      self.proxy_port,
                                                      self.factory))
            # pylint: enable=E1101
 
    def _process_auth_error(self, failure, url, contextFactory):
        """Process an auth failure."""
        failure.trap(error.Error)
        if failure.value.status == str(http.PROXY_AUTH_REQUIRED):
            # we try to get the page using the basic auth
            auth = base64.b64encode('%s:%s' % (self.username, self.password))
            auth_header = 'Basic ' + auth.strip()
            self.factory = ProxyClientFactory(self.proxy_url, self.proxy_port,
                            url, headers={'Proxy-Authorization': auth_header})
            self._connect(url, contextFactory)
            return self.factory.deferred
        else:
            return failure
 
    def get_page(self, url, contextFactory=None, *args, **kwargs):
        """Download a webpage as a string.
 
        This method relies on the twisted.web.client.getPage but adds an extra
        step. If there is an auth error the method will perform a second try
        so that the username and password are used.
        """
        self.factory = ProxyClientFactory(self.proxy_url, self.proxy_port, url,
                                          headers={'Connection': 'close'})
        self._connect(url, contextFactory)
        self.factory.deferred.addErrback(self._process_auth_error, url,
                                    contextFactory)
        return self.factory.deferred
 
    @defer.inlineCallbacks
    def shutdown(self):
        """Clean all connectors."""
        for connector in self.connectors:
            yield connector.disconnect()
        defer.returnValue(True)
 
 
class SimpleResource(resource.Resource):
    """A simple web resource."""
 
    def render_GET(self, request):
        """Make a bit of html out of this resource's
        content."""
        return SAMPLE_RESOURCE
 
 
class SaveHTTPChannel(http.HTTPChannel):
    """A save protocol to be used in tests."""
 
    protocolInstance = None
 
    def connectionMade(self):
        """Keep track of the given protocol."""
        SaveHTTPChannel.protocolInstance = self
        http.HTTPChannel.connectionMade(self)
 
 
class SaveSite(server.Site):
    """A site that let us know when it closed."""
 
    protocol = SaveHTTPChannel
 
    def __init__(self, *args, **kwargs):
        """Create a new instance."""
        server.Site.__init__(self, *args, **kwargs)
        # we disable the timeout in the tests, we will deal with it manually.
        self.timeOut = None
 
 
class MockWebServer(object):
    """A mock webserver for testing"""
 
    def __init__(self):
        """Start up this instance."""
        root = resource.Resource()
        root.putChild(SIMPLERESOURCE, SimpleResource())
 
        root.putChild(THROWERROR, resource.NoResource())
 
        unauthorized_resource = resource.ErrorPage(resource.http.UNAUTHORIZED,
                                                "Unauthorized", "Unauthorized")
        root.putChild(UNAUTHORIZED, unauthorized_resource)
 
        self.site = SaveSite(root)
        application = service.Application('web')
        self.service_collection = service.IServiceCollection(application)
        #pylint: disable=E1101
        self.tcpserver = internet.TCPServer(0, self.site)
        self.tcpserver.setServiceParent(self.service_collection)
        self.service_collection.startService()
 
    def get_url(self):
        """Build the url for this mock server."""
        #pylint: disable=W0212
        port_num = self.tcpserver._port.getHost().port
        return "http://localhost:%d/" % port_num
 
    @defer.inlineCallbacks
    def stop(self):
        """Shut it down."""
        #pylint: disable=E1101
        # make the connection time out so that is works with squid3 when
        # the connection is kept alive.
        if self.site.protocol.protocolInstance:
            self.site.protocol.protocolInstance.timeoutConnection()
        yield self.service_collection.stopService()
 
 
class ProxyTestCase(SquidTestCase):
    """A squid test with no auth proxy."""
 
    @defer.inlineCallbacks
    def setUp(self):
        """Set the tests."""
        yield super(ProxyTestCase, self).setUp()
        self.ws = MockWebServer()
        self.proxy_client = None
        self.addCleanup(self.teardown_client_server)
        self.url = self.ws.get_url() + SIMPLERESOURCE
 
    def teardown_client_server(self):
        """Clean resources."""
        if self.proxy_client is not None:
            self.proxy_client.shutdown()
            return defer.gatherResults([self.ws.stop(),
                               self.proxy_client.shutdown(),
                               self.proxy_client.factory.disconnected_d])
        else:
            return self.ws.stop()
 
    def access_noauth_url(self, address, port):
        """Access a url through the proxy."""
        self.proxy_client = ProxyWebClient(proxy_url=address, proxy_port=port)
        return self.proxy_client.get_page(self.url)
 
    def access_auth_url(self, address, port, username, password):
        """Access a url through the proxy."""
        self.proxy_client = ProxyWebClient(proxy_url=address, proxy_port=port,
                                         username=username, password=password)
        return self.proxy_client.get_page(self.url)
 
    @defer.inlineCallbacks
    def test_noauth_url_access(self):
        """Test accessing to the url."""
        settings = self.get_nonauth_proxy_settings()
        # if there is an exception we fail.
        data = yield self.access_noauth_url(settings['host'],
                                            settings['port'])
        self.assertEqual(SAMPLE_RESOURCE, data)
 
    @defer.inlineCallbacks
    def test_auth_url_access(self):
        """Test accessing to the url."""
        settings = self.get_auth_proxy_settings()
        # if there is an exception we fail.
        data = yield self.access_auth_url(settings['host'],
                                          settings['port'],
                                          settings['username'],
                                          settings['password'])
        self.assertEqual(SAMPLE_RESOURCE, data)
 
    def test_auth_url_401(self):
        """Test failing accessing the url."""
        settings = self.get_auth_proxy_settings()
        # swap password for username to fail
        d = self.failUnlessFailure(self.access_auth_url(settings['host'],
                                        settings['port'], settings['password'],
                                        settings['username']), error.Error)
        return d
 
    def test_auth_url_407(self):
        """Test failing accessing the url."""
        settings = self.get_auth_proxy_settings()
        d = self.failUnlessFailure(self.access_noauth_url(settings['host'],
                                   settings['port']), error.Error)
        return d

The above code contains the tests for the test case, and the important bits are:

class SaveHTTPChannel(http.HTTPChannel):
    """A save protocol to be used in tests."""
 
    protocolInstance = None
 
    def connectionMade(self):
        """Keep track of the given protocol."""
        SaveHTTPChannel.protocolInstance = self
        http.HTTPChannel.connectionMade(self)
 
 
class SaveSite(server.Site):
    """A site that let us know when it closed."""
 
    protocol = SaveHTTPChannel
 
    def __init__(self, *args, **kwargs):
        """Create a new instance."""
        server.Site.__init__(self, *args, **kwargs)
        # we disable the timeout in the tests, we will deal with it manually.
        self.timeOut = None

The above defines a protocol that keeps track of the instance that was used, so that we can trigger the timeout in a cleanup function.
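Stripped of the Twisted machinery, the trick is simply a class attribute that every new connection overwrites, so cleanup code can always reach the most recent protocol instance. A toy illustration of the pattern (not the original code):

```python
class TrackedProtocol:
    """Toy version of the SaveHTTPChannel instance-tracking pattern."""

    # Shared class attribute: always points at the most recent instance.
    protocol_instance = None

    def connection_made(self):
        # Each new connection overwrites the attribute, so the test's
        # cleanup can find the live protocol and time it out.
        TrackedProtocol.protocol_instance = self
```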

        if self.site.protocol.protocolInstance:
            self.site.protocol.protocolInstance.timeoutConnection()

This tells the server to time out.

    def teardown_client_server(self):
        """Clean resources."""
        if self.proxy_client is not None:
            self.proxy_client.shutdown()
            return defer.gatherResults([self.ws.stop(),
                               self.proxy_client.shutdown(),
                               self.proxy_client.factory.disconnected_d])
        else:
            return self.ws.stop()

And that is the cleanup function. That is all; now I guess I'll move on to adding proxy support, or to making sure the test case works on Windows, which is certainly going to be a different kind of issue.

Read more
mandel

The following is some code I have been working on (and stupidly wasting time on a small error) that allows you to get a page, using a method similar to twisted.web.client.getPage, through a proxy that uses basic auth.

# -*- coding: utf-8 -*-
#
# Copyright 2011 Canonical Ltd.
#
# This program is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License version 3, as published
# by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranties of
# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR
# PURPOSE.  See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program.  If not, see <http://www.gnu.org/licenses/>.
"""Test the squid test case."""
import base64
 
from twisted.internet import defer, reactor
from twisted.web import client, error, http
 
from ubuntuone.devtools.testcases.squid import SquidTestCase
 
# ignore common twisted lint errors
# pylint: disable=C0103, W0212
 
 
class ProxyClientFactory(client.HTTPClientFactory):
    """Factory that supports proxy."""
 
    def __init__(self, proxy_url, proxy_port, url, headers=None):
        self.proxy_url = proxy_url
        self.proxy_port = proxy_port
        client.HTTPClientFactory.__init__(self, url, headers=headers)
 
    def setURL(self, url):
        self.host = self.proxy_url
        self.port = self.proxy_port
        self.url = url
        self.path = url
 
 
class ProxyWebClient(object):
    """Provide useful web methods with proxy."""
 
    def __init__(self, proxy_url=None, proxy_port=None, username=None,
            password=None):
        """Create a new instance with the proxy settings."""
        self.proxy_url = proxy_url
        self.proxy_port = proxy_port
        self.username = username
        self.password = password
 
    def _process_auth_error(self, failure, url, contextFactory):
        """Process an auth failure."""
        # we try to get the page using the basic auth
        failure.trap(error.Error)
        if failure.value.status == str(http.PROXY_AUTH_REQUIRED):
            auth = base64.b64encode('%s:%s' % (self.username, self.password))
            auth_header = 'Basic ' + auth.strip()
            factory = ProxyClientFactory(self.proxy_url, self.proxy_port, url,
                    headers={'Proxy-Authorization': auth_header})
            # pylint: disable=E1101
            reactor.connectTCP(self.proxy_url, self.proxy_port, factory)
            # pylint: enable=E1101
            return factory.deferred
        else:
            return failure
 
    def get_page(self, url, contextFactory=None, *args, **kwargs):
        """Download a webpage as a string.
 
        This method relies on the twisted.web.client.getPage but adds an extra
        step. If there is an auth error the method will perform a second try
        so that the username and password are used.
        """
        scheme, _, _, _ = client._parse(url)
        factory = ProxyClientFactory(self.proxy_url, self.proxy_port, url)
        if scheme == 'https':
            from twisted.internet import ssl
            if contextFactory is None:
                contextFactory = ssl.ClientContextFactory()
            # pylint: disable=E1101
            reactor.connectSSL(self.proxy_url, self.proxy_port,
                               factory, contextFactory)
            # pylint: enable=E1101
        else:
            # pylint: disable=E1101
            reactor.connectTCP(self.proxy_url, self.proxy_port, factory)
            # pylint: enable=E1101
        factory.deferred.addErrback(self._process_auth_error, url,
                                    contextFactory)
        return factory.deferred
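One detail worth calling out in _process_auth_error: the Proxy-Authorization value is just 'Basic ' plus the base64 of 'username:password'. Note that on Python 3 b64encode requires bytes, so a standalone version of that step would look something like this (a sketch, not the post's code):

```python
import base64

def proxy_auth_header(username, password):
    """Build the value for a basic-auth Proxy-Authorization header."""
    creds = '%s:%s' % (username, password)
    # b64encode works on bytes; decode the result back to str for the header.
    return 'Basic ' + base64.b64encode(creds.encode('utf-8')).decode('ascii')
```

For example, proxy_auth_header('user', 'pass') returns 'Basic dXNlcjpwYXNz'.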

I hope that this helps anyone out there :)

Read more
mandel

At the moment we are working on providing proxy support in Ubuntu One. In order to test this correctly I have been setting up a LAN in my office so that I can test as many scenarios as possible. One of those scenarios is the one in which the proxy's auth uses Active Directory.

Because I use bind9 on one of my boxes for DNS, I had to dig out how to configure it to work with AD. To do that I did the following:

  1. Edited named.conf.local to add a subdomain for the AD machine:

    zone "ad.example.com" {
            type master;
            file "/etc/bind/db.ad.example.com";
            allow-update { 192.168.1.103; };
    };
    
  2. Configured the subzone to work with AD.

    ; BIND data file for local loopback interface
    ;
    $TTL    604800
    @       IN      SOA     ad.example.com. root.ad.example.com. (
                                  2         ; Serial
                             604800         ; Refresh
                              86400         ; Retry
                            2419200         ; Expire
                             604800 )       ; Negative Cache TTL
    ;
    @       IN      NS      ad.example.com.
    @       IN      A       127.0.0.1
    @       IN      AAAA    ::1
    ;
    ; AD horrible domains
    ;
    dc1.ad.example.com.    A       192.168.1.103
    _ldap._tcp.ad.example.com.     SRV     0 0 389  dc1.ad.example.com.
    _kerberos._tcp.ad.example.com.    SRV     0 0 88   dc1.ad.example.com.
    _ldap._tcp.dc._msdcs.ad.example.com.   SRV     0 0 389  dc1.ad.example.com.
    _kerberos._tcp.dc._msdcs.ad.example.com.    SRV     0 0 88   dc1.ad.example.com.
    gc._msdcs.ad.example.com.      SRV     0 0 3268 dc1.ad.example.com.
    

    Note: It is important to remember that the computer name of the server that has the AD role is dc1; if we use a different name, we have to change the configuration accordingly.

  3. Restart the bind9 service:

    sudo /etc/init.d/bind9 restart
    
  4. Install the AD server and specify that you DO NOT want to set that server as a DNS server too.
  5. Set the AD server to use your Ubuntu with your bind9 as the DNS server.
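The SRV records in step 2 all share the same priority/weight/port/target layout. A tiny parser makes the fields explicit (purely illustrative, not part of the setup):

```python
def parse_srv(record_line):
    """Split a zone-file SRV line into named fields."""
    # e.g. "_ldap._tcp.ad.example.com.  SRV  0 0 389  dc1.ad.example.com."
    name, rtype, priority, weight, port, target = record_line.split()
    assert rtype == 'SRV'
    return {'name': name, 'priority': int(priority), 'weight': int(weight),
            'port': int(port), 'target': target}

rec = parse_srv('_ldap._tcp.ad.example.com. SRV 0 0 389 dc1.ad.example.com.')
print(rec['port'], rec['target'])
```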

There are lots of things missing if you wanted to use this as a setup for a corporate network, but it does the trick in my LAN, since I do not have AD replication or other fancy things. Maybe it is useful for your home network, who knows...

Read more