— pissing into the wind


I’ve got step-ca set up in my homelab as a private ACME server for automated certificate renewal. It’s been working fantastically well with Traefik in front of all my Docker containers. I’d been meaning to automate the certificate renewal on my pihole and finally got around to doing it this week.

I’m going to assume you’ve already set up step-ca or some other ACME server. This procedure also works with Let’s Encrypt; you just wouldn’t use the --server switch when issuing the certificate.

username: latte
acme ca: acme.guammie.lan
pihole: pihole.guammie.lan

# Add user to www-data in preparation for webroot mode
sudo usermod -a -G www-data latte

# Log off and back on for the group modification to take effect, then validate
id latte

# Allow the user to restart lighttpd without having to reauthenticate
# this is necessary for the reloadcmd to work in the cron job
echo "latte ALL=(ALL) NOPASSWD: /usr/sbin/service lighttpd restart" | sudo tee -a /etc/sudoers.d/latte-nopasswd

# My pihole isn't set up for internal name resolution at the OS level,
# so I'm adding an entry for the CA
sudo sh -c 'echo " acme.guammie.lan" >> /etc/hosts'

# Setup the directory structure for where acme.sh will download certificates to
sudo mkdir -p /etc/pki/certs /etc/pki/keys /etc/pki/fullchain
sudo chown -R latte:www-data /etc/pki

# You need the CA root certificate for this.  Download it and put it somewhere.
cp root_ca.crt /etc/pki/certs/

# Clone acme.sh
git clone https://github.com/Neilpang/acme.sh.git

# I initially did this in standalone mode and needed socat... 
# don't know if it's necessary for webroot mode.  Never bothered to check.
sudo apt install socat
sudo setcap 'cap_net_bind_service=+ep' /usr/bin/socat

# Getting ready to install acme.sh in /opt
sudo mkdir /opt/acme.sh
sudo chown -R latte:latte /opt/acme.sh

# Install acme.sh to /opt
cd acme.sh
./acme.sh --install --home /opt/acme.sh --config-home /opt/acme.sh --cert-home /etc/pki/certs

# If you skip logging out and back in, everything still goes to ~/.acme.sh.
# The installer even tells you to:
# "OK, Close and reopen your terminal to start using acme.sh"
Log back in (duh).

# Get the certificates
cd /opt/acme.sh 
./acme.sh --issue --webroot /var/www/html -d $HOSTNAME.guammie.lan --server https://acme.guammie.lan/acme/guammie/directory --ca-bundle /etc/pki/certs/root_ca.crt --days 7

# Install the certificates to the previously setup directories
./acme.sh --install-cert --domain $HOSTNAME.guammie.lan --cert-file /etc/pki/certs/$HOSTNAME.guammie.lan.cer --key-file /etc/pki/keys/$HOSTNAME.guammie.lan.key --fullchain-file /etc/pki/fullchain/$HOSTNAME.guammie.lan.crt --reloadcmd "sudo service lighttpd restart"

# Even though we haven't set up lighttpd to use the certificates yet, it's
# important to specify the reloadcmd because it becomes part of the renewal
# cron job that gets created during install
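
Once the cert files are in place, openssl verify is a quick sanity check that an issued certificate really chains to your root. Here's a runnable sketch using a throwaway CA and leaf cert; on the pihole you'd point -CAfile at /etc/pki/certs/root_ca.crt and verify the issued .cer instead:

```shell
# Throwaway root CA and a leaf cert signed by it (stand-ins for
# /etc/pki/certs/root_ca.crt and the issued pihole certificate)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=demo-root" -days 1 2>/dev/null
openssl req -new -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.csr \
  -subj "/CN=pihole.demo" 2>/dev/null
openssl x509 -req -in leaf.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out leaf.crt -days 1 2>/dev/null
# Prints "leaf.crt: OK" when the chain is good
openssl verify -CAfile ca.crt leaf.crt
```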

# I'm putting this here just to remember to do it periodically
# I should setup a cron job for it
./acme.sh --upgrade --auto-upgrade

# Now modify your /etc/lighttpd/external.conf file
sudo nano /etc/lighttpd/external.conf                                                                                                          

# File contents of external.conf
$HTTP["host"] == "pihole.guammie.lan" {
  # Ensure the Pi-hole Block Page knows that this is not a blocked domain
  setenv.add-environment = ("fqdn" => "true")

  # Enable the SSL engine with a LE cert, only for this specific host
  $SERVER["socket"] == ":443" {
    ssl.engine = "enable"
    ssl.pemfile = "/etc/pki/certs/pihole.guammie.lan.cer"
    ssl.privkey = "/etc/pki/keys/pihole.guammie.lan.key"
    ssl.honor-cipher-order = "enable"
    ssl.cipher-list = "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH"
    ssl.openssl.ssl-conf-cmd = ("Protocol" => "-TLSv1.1, -TLSv1, -SSLv3")
  }

  # Redirect HTTP to HTTPS
  $HTTP["scheme"] == "http" {
    $HTTP["host"] =~ ".*" {
      url.redirect = (".*" => "https://%0$0")
    }
  }
}

That’s it.


It took a few minutes of googling around to find instructions for this that work, so I’m going to put some very specific ones here for my future reference.

I’ve been attaching SSDs to my RPi 4s, and finding good cables has turned out to be a challenge because of the lack of proper UASP and Trim support for the controllers in Linux. Most of the difficulty is finding out which controller an adapter uses and then determining whether it uses the UAS driver AND supports Trim. I ended up going with these cables from Startech (they’re half the price if you order on Amazon). There’s a firmware update available that adds Trim support.

Once the firmware is updated, there’s just a little bit of config to be done in Linux:

Identify the vendor (174c) and product (55aa) IDs:

$ lsusb
Bus 002 Device 002: ID 174c:55aa ASMedia Technology Inc. Name: ASM1051E SATA 6Gb/s bridge, ASM1053E SATA 6Gb/s bridge, ASM1153 SATA 3Gb/s bridge, ASM1153E SATA 6Gb/s bridge

Create a udev rule to change the provisioning mode to “unmap”:

echo 'ACTION=="add|change", ATTRS{idVendor}=="174c", ATTRS{idProduct}=="55aa", SUBSYSTEM=="scsi_disk", ATTR{provisioning_mode}="unmap"' | sudo tee -a /etc/udev/rules.d/50-usb-ssd-trim.rules

Enable the Trim timer service:

sudo systemctl enable fstrim.timer

I reboot after this and do some validation checks:

UAS driver:

$ lsusb -t
/: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/4p, 5000M
|__ Port 2: Dev 2, If 0, Class=Mass Storage, Driver=uas, 5000M

Trim works:

$ sudo fstrim -v /
/: 187.4 MiB (196468736 bytes) trimmed

That’s it.


Recently noticed time skew across my workstations and servers at home and put together a Stratum-1 NTP server for the local network using the Adafruit Ultimate GPS hat and an RPi 4. I’ll post the write up later. In the meantime, here are the commands I’m using to point all the rest of my RPis at the NTP servers for the local network:

sudo timedatectl set-timezone America/Chicago
sudo timedatectl set-ntp true
sudo bash -c 'echo "NTP=tick.guammie.local tock.guammie.local" >> /etc/systemd/timesyncd.conf'
sudo systemctl restart systemd-timesyncd
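
For reference, after the append the [Time] section of timesyncd.conf should look something like this (tick and tock are my local NTP servers; the stock Debian/Raspbian file already contains the [Time] header):

```ini
# /etc/systemd/timesyncd.conf
[Time]
NTP=tick.guammie.local tock.guammie.local
```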

You can check and validate with these commands:

timedatectl timesync-status
timedatectl show-timesync
systemctl status systemd-timesyncd


I’ve been a longtime fan of Windows Live Writer.  Alas, it has been unsupported for many moons and I haven’t been able to get it working with SSL.  The good news is that Microsoft decided to release WLW to the open source community.  The even better news is that someone has forked the code and taken up the mantle.  If you’re an existing Windows Live Writer user, I suggest you give Open Live Writer a try.  The setup and user interface will be familiar, and things seem to work overall.


I posted this on the FreeNAS forums.

Here’s a short write-up on how I got SSL going with LDAPS against AD for authentication. I used the plugin and am working out of / in the jail.
keytool is located at /usr/pbi/subsonic-amd64/bin
1) Create a cnf file to be used for generating the csr.

[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
req_extensions = v3_req
x509_extensions = v3_req
[ req_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_default = US
countryName_min = 2
countryName_max = 2
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = Texas
localityName = Locality Name (eg, city)
localityName_default = San Antonio
0.organizationName = Organization Name (eg, company)
0.organizationName_default = Company
organizationalUnitName = Organizational Unit Name (eg, section)
organizationalUnitName_default = Department
commonName = Common Name (hostname)
commonName_default = subsonic
commonName_max = 64
emailAddress = Email Address
emailAddress_default = [email protected]
emailAddress_max = 64
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = subsonic
DNS.2 = subsonic.domain.com
IP.1 =

2) Generate the csr and private key

openssl req -new -sha256 -out subsonic.csr -config subsonic.cnf -newkey rsa:2048 -nodes -keyout subsonic.key
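
Before submitting the CSR to the CA, it's worth checking that the key and CSR actually belong together by comparing their RSA modulus hashes. Here's a runnable sketch with a throwaway pair; substitute subsonic.key and subsonic.csr for the real check:

```shell
# Throwaway key/CSR pair standing in for subsonic.key / subsonic.csr
openssl req -new -newkey rsa:2048 -nodes -keyout demo.key -out demo.csr \
  -subj "/CN=subsonic" 2>/dev/null
# A key and CSR belong together when their moduli hash identically
key_md5=$(openssl rsa -in demo.key -noout -modulus | openssl md5)
csr_md5=$(openssl req -in demo.csr -noout -modulus | openssl md5)
[ "$key_md5" = "$csr_md5" ] && echo "key and CSR match"
```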

3) Submit the CSR to your CA. I used a Windows CA and received the subsonic.cer certificate.
4) Generate a PKCS12 file to be used for the web SSL Java keystore. I could not get this working using the system keystore, so this one is just for HTTPS.

openssl pkcs12 -export -out subsonic.pfx -inkey subsonic.key -in subsonic.cer -certfile CA-Certificate.cer

5) Create the Java Keystore to be used for SSL access.

./keytool -importkeystore -srckeystore subsonic.pfx -destkeystore subsonic.keystore -srcstoretype PKCS12 -srcalias 1 -destalias subsonic.domain.com

6) Add your CA certificate to the system Java keystore as well. This will be used for LDAPS authentication. The default password is ‘changeit’. You should probably change that as well.

./keytool -import -trustcacerts -alias CA-domain.com -file /CA-Certificate.cer -keystore /usr/pbi/subsonic-amd64/openjdk7/jre/lib/security/cacerts

7) Enable LDAP Authentication under Settings\Advanced

LDAP URL: ldaps://server.domain.com:636/dc=domain,dc=com
LDAP search filter: (&(sAMAccountName={0})(&(objectCategory=user)(memberof=cn=subsonic,ou=groups,dc=domain,dc=com)))
LDAP Manager: DOMAIN\user (non privileged!)

8) The default user cache is too high. Edit it in /var/db/subsonic/jetty/4427/webapp/WEB-INF/classes/ehcache.xml

<cache name="userCache"


I’m currently working on a wireless deployment with a requirement to use MAC filtering.  There are over 600 laptops being deployed, each going to a unique location.  Part of the imaging process does an ipconfig and dumps the output to a text file, which I can then use to copy/paste the hostname and MAC into the Cisco 8510 wireless controller.  I’m lazy, so I made a bash script to parse the ipconfig text files. I wish I knew how to do this in Windows, but I work with what I’ve got. The script takes this input from a text file:

Windows IP Configuration

   Host Name . . . . . . . . . . . . : GU0123LT01
   Primary Dns Suffix  . . . . . . . : guammie.com
   Node Type . . . . . . . . . . . . : Hybrid
   IP Routing Enabled. . . . . . . . : No
   WINS Proxy Enabled. . . . . . . . : No
   DNS Suffix Search List. . . . . . : guammie.com

Wireless LAN adapter Wireless Network Connection:

   Media State . . . . . . . . . . . : Media disconnected
   Connection-specific DNS Suffix  . :
   Description . . . . . . . . . . . : Ralink RT5390R 802.11b/g/n 1×1 Wi-Fi Adapter
   Physical Address. . . . . . . . . : B8-76-3F-25-34-4D
   DHCP Enabled. . . . . . . . . . . : Yes
   Autoconfiguration Enabled . . . . : Yes

Ethernet adapter Local Area Connection:

   Connection-specific DNS Suffix  . :
   Description . . . . . . . . . . . : Realtek PCIe GBE Family Controller
   Physical Address. . . . . . . . . : B4-B5-2F-8D-BF-2B
   DHCP Enabled. . . . . . . . . . . : No
   Autoconfiguration Enabled . . . . : Yes
   IPv4 Address. . . . . . . . . . . :
   Subnet Mask . . . . . . . . . . . :
   Default Gateway . . . . . . . . . :
   DNS Servers . . . . . . . . . . . :
   NetBIOS over Tcpip. . . . . . . . : Enabled

And generates this command line that I can just copy/paste into the controller:

config macfilter add B8:76:3F:25:34:4D 18 guunit-clients "unit 0123 laptop"

Here’s the script.  It’s not the cleanest, but it works:

# FILES is a glob of the collected ipconfig text files
for f in $FILES
do
  # $f holds the current file name.
  # Pull the "Host Name" line, keep the last word, then grab the 4-digit unit number
  hostname="$(awk '/Host Name/ {c=1}c-->0' $f | sed -n '/\<Host Name\>/ s/.*[[:space:]]\([[:alnum:]]\+\)$/\1/p' | awk '{print substr($0,3,4)}')"
  # Pull the line after the wireless adapter description and convert the MAC to colon notation
  mac="$(awk '/Ralink RT5390R/ {c=1;next}c-->0' $f | awk -F 'Physical Address. . . . . . . . . : ' '{print $2}' | sed 's/-/:/g')"

  echo "config macfilter add $mac 18 guunit-clients \"unit $hostname laptop\""
done
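
To see the pipeline in action without the real dump files, you can feed it the sample ipconfig output from above (this is self-contained; the only change is reading a single file instead of looping):

```shell
# Self-contained demo: run the same extraction against the sample ipconfig output
cat > sample.txt <<'EOF'
   Host Name . . . . . . . . . . . . : GU0123LT01

   Description . . . . . . . . . . . : Ralink RT5390R 802.11b/g/n 1x1 Wi-Fi Adapter
   Physical Address. . . . . . . . . : B8-76-3F-25-34-4D
EOF
hostname="$(awk '/Host Name/ {c=1}c-->0' sample.txt | sed -n '/\<Host Name\>/ s/.*[[:space:]]\([[:alnum:]]\+\)$/\1/p' | awk '{print substr($0,3,4)}')"
mac="$(awk '/Ralink RT5390R/ {c=1;next}c-->0' sample.txt | awk -F 'Physical Address. . . . . . . . . : ' '{print $2}' | sed 's/-/:/g')"
# Prints: config macfilter add B8:76:3F:25:34:4D 18 guunit-clients "unit 0123 laptop"
echo "config macfilter add $mac 18 guunit-clients \"unit $hostname laptop\""
```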


That’s it.


1)  Get the latest PBIS Open Edition from BeyondTrust (formerly Likewise): http://download1.beyondtrust.com/Technical-Support/Downloads/PowerBroker-Identity-Services-Open-Edition/?Pass=True

2)  chmod 755 the file, execute it, then install it.

chmod 755 pbis-open-


cd pbis-open-


3)  Join the domain

sudo domainjoin-cli join guammie.com administrator

4)  Add domain group to sudoers

sudo visudo

%GUAMMIE\\domain^admins ALL=(ALL) ALL

5)  Make domain logins use Bash (or whatever shell you want), refresh lsass, and clear the AD cache

sudo /opt/likewise/bin/lwregshell set_value '[HKEY_THIS_MACHINE\Services\lsass\Parameters\Providers\ActiveDirectory]' LoginShellTemplate /bin/bash
sudo /opt/likewise/bin/lwregshell set_value '[HKEY_THIS_MACHINE\Services\lsass\Parameters\Providers\Local]' LoginShellTemplate /bin/bash
sudo /opt/likewise/bin/lwsm refresh lsass
sudo /opt/likewise/bin/lw-ad-cache --delete-all

That’s it.


I always forget this when I need it most, and there are 10000 entries on Google with the wrong info. To add a DNS server in Ubuntu Server, edit the following file as you would a resolv.conf file: /etc/resolvconf/resolv.conf.d/base. Any entries manually added to /etc/resolv.conf get erased when networking is restarted.
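
The base file takes plain resolv.conf syntax. A minimal sketch, with a placeholder server address:

```text
# /etc/resolvconf/resolv.conf.d/base
nameserver 192.168.1.53
search guammie.com
```

After editing, run sudo resolvconf -u to regenerate /etc/resolv.conf from the pieces.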


I don’t know why, but I’ve been thinking about putting a proxy on my home network. Actually I do know why. It was because I tried to replace my SSG5 with a stupid ASA 5505 and wanted the web filter and inline AV scanning capability back. So I began building out a Squid server to use. The topology I had in mind would look like this:


Using WCCP or policy-based routing, I would send HTTP traffic from clients in the Trust/inside zone to the proxy server in the DMZ zone and do any content filtering and AV scanning on that box. Before I go any further, let me say that this project pushed me over the edge to ripping out that damn ASA. I’ve been trying to like the ASA platform for a couple of months, but things that just worked on ScreenOS are either impossible or make me feel like I’m doing something dirty when I implement them. On the ASAs, you can’t use WCCP to point to a proxy in another zone. That means the Squid box would reside in the Trust/inside zone. This is fine at home, but not in a business, so there’d be no point in implementing it this way, as I’ll never use it anywhere else. Ok… so let’s use the old PBR way… read documentation… what’s this?  Policy-based routing is not supported on the ASA platform? Bleh. I ripped out the ASA and put the SSG back in. Now I have OSPF routing through hostname-based (yes, it works with dynamic addresses) VPN again as well. I also have the content filtering and inline AV scanning back. So why am I doing this again? I figure I may as well just get the transparent proxy going for kicks.

Here are the steps to get Squid 3.1.19 working on a CentOS 6.2 ESXi 4.1 Build 582267 VM using a minimal install. I’m going to assume you’ve done no configuration during the install and installed no other packages. This is all command line.

Ubuntu has another LTS release coming out in a couple of months, so I didn’t want to use the old 10.04 release. I’ve been thinking about CentOS lately just to stay familiar with RHEL/CentOS since a lot of businesses use it. Lucky for me, CentOS did a 6.2 release back in December 2011. I pulled down the ISOs and did the install.

1. Get network connectivity

vi /etc/sysconfig/network-scripts/ifcfg-eth0

Do the following to your config file:

ONBOOT="yes"                                #change to yes
BOOTPROTO=none                        #change to dhcp and stop here if dynamic; else add rest
IPADDR=                   #add
PREFIX=24                                        #add
GATEWAY=                  #add
DNS1=                   #add
DNS2=                   #add
DOMAIN=guammie.com                #add
DEFROUTE=yes                              #add

Save the file and run service network restart

2. Update the system

yum update

Install whatever comes up and reboot

3. Install all the packages (and their dependencies) we’ll need for this project and some other useful things not included in the base install

yum install gcc perl vim-enhanced mlocate wget make gcc-c++ libstdc++-devel cyrus-sasl-devel libcap-devel openssl-devel openssl-static openldap-devel pam-devel db4-devel db4-cxx ntp ntpdate

4. Install VMware Tools

Run the tools install in the vSphere client

mkdir /media/dvd
mount /dev/dvd /media/dvd
tar -xzf VMwareTools-8.3.12-559003.tar.gz -C /root/
perl /root/vmware-tools-distrib/vmware-install.pl
run through the installer (pretty much hit return a bunch of times)

after the installer finishes:
umount /media/dvd
vim /etc/init.d/vmware-tools
add the following so that the first 3 lines look like this and then save:
# chkconfig: 345 97 13

chkconfig vmware-tools on

5. Install Squid from repository

I do this because I’m a lazy bastard. Installing from repository creates everything you need (users, startup scripts, etc) with a single command.

yum install squid

6. Upgrade Squid from source

So now you’ve got Squid 3.1.10 installed (as of 20120215). The latest version right now is 3.1.19. Let’s upgrade!

squid -v

Note those build options. We’re going to mostly use them.

wget http://www.squid-cache.org/Versions/v3/3.1/squid-3.1.19.tar.gz
tar -xzf squid-3.1.19.tar.gz

cd squid-3.1.19

./configure --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu --target=x86_64-redhat-linux-gnu --program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info --exec_prefix=/usr --libexecdir=/usr/lib64/squid --localstatedir=/var --datadir=/usr/share/squid --sysconfdir=/etc/squid --with-logdir=/var/log/squid --with-pidfile=/var/run/squid.pid --disable-dependency-tracking --enable-arp-acl --enable-follow-x-forwarded-for --enable-auth=basic,digest,ntlm,negotiate --enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL,DB,POP3,squid_radius_auth --enable-ntlm-auth-helpers=smb_lm,no_check,fakeauth --enable-digest-auth-helpers=password,ldap,eDirectory --enable-negotiate-auth-helpers=squid_kerb_auth --enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group --enable-cache-digests --enable-cachemgr-hostname=localhost --enable-delay-pools --enable-epoll --enable-icap-client --enable-ident-lookups --enable-linux-netfilter --enable-referer-log --enable-removal-policies=heap,lru --enable-snmp --enable-ssl --enable-storeio=aufs,diskd,ufs --enable-useragent-log --enable-wccpv2 --enable-esi --with-aio --with-default-user=squid --with-filedescriptors=16384 --with-dl --with-openssl --with-pthreads build_alias=x86_64-redhat-linux-gnu host_alias=x86_64-redhat-linux-gnu target_alias=x86_64-redhat-linux-gnu CFLAGS='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fpie -fpic' LDFLAGS='-fPIC -pie -z relro -z now -fstack-protector' CXXFLAGS='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fpie' --with-squid=/builddir/build/BUILD/squid-3.1.19

make && make install

chkconfig squid on

7. Now we need to adjust IPTables

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
iptables -I INPUT 4 -p tcp --dport 3128 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -I INPUT 4 -p udp --dport 3401 -j ACCEPT
service iptables save

The first rule is necessary for the transparent redirection. The second rule is necessary just to connect at all. The third rule is for SNMP if you plan on monitoring Squid itself.

8. Edit /etc/squid/squid.conf

Here’s mine. I’ve done things like setup the HTTP intercept, added SNMP support, and some other stuff about caching Windows Updates. Other than that it’s somewhat stock.

vim /etc/squid/squid.conf

acl manager proto cache_object
acl localhost src ::1
acl to_localhost dst ::1

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src    # RFC1918 possible internal network
acl localnet src    # RFC1918 possible internal network
acl localnet src    # RFC1918 possible internal network
acl localnet src fc00::/7       # RFC 4193 local private network range
acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80        # http
acl Safe_ports port 21        # ftp
acl Safe_ports port 443        # https
acl Safe_ports port 70        # gopher
acl Safe_ports port 210        # wais
acl Safe_ports port 1025-65535    # unregistered ports
acl Safe_ports port 280        # http-mgmt
acl Safe_ports port 488        # gss-http
acl Safe_ports port 591        # filemaker
acl Safe_ports port 777        # multiling http

acl monitor src
acl snmp snmp_community guammie

visible_hostname squid.guammie.com

# Recommended minimum Access Permission configuration:
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost


# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost
snmp_access allow snmp monitor
snmp_access deny all

# And finally deny all other access to this proxy
http_access deny all

# Pull entire files from the start when a range is requested; for Windows Updates
range_offset_limit -1

# Google what this does.  I’m too lazy to type it all out, but has to do with Windows Updates
quick_abort_min -1

# This removes proxy info from UserAgent
#via off

# Uncomment request_header and then one of the following header_replace lines to present either IE or Firefox
#request_header_access User-Agent deny all
#header_replace User-Agent Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; OfficeLiveConnector.1.4; OfficeLivePatch.1.3)
#header_replace User-Agent Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv: Gecko/20100914 Firefox/3.6.10

# Squid normally listens to port 3128
http_port 3128 intercept

# SNMP port; 3401 is the official port
snmp_port 3401

# We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?

# Store large objects
maximum_object_size 200 MB

# Uncomment and adjust the following to add a disk cache directory.
cache_dir ufs /var/spool/squid 4096 16 256

# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid

refresh_pattern -i download.windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|asf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims
refresh_pattern -i download.microsoft.com/.*\.(cab|exe|ms[i|u|f]|asf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims
refresh_pattern -i update.microsoft.com/.*\.(cab|exe|ms[i|u|f]|asf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims
refresh_pattern -i windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|asf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims
refresh_pattern -i windowsupdate.microsoft.com/.*\.(cab|exe|ms[i|u|f]|asf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims
refresh_pattern -i ntservicepack.microsoft.com/.*\.(cab|exe|ms[i|u|f]|asf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims
refresh_pattern -i wustat.windows.com/.*\.(cab|exe|ms[i|u|f]|asf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims
refresh_pattern -i microsoft.com/.*\.(cab|exe|ms[i|u|f]|asf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims


# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:        1440    20%    10080
refresh_pattern ^gopher:    1440    0%    1440
refresh_pattern -i (/cgi-bin/|\?) 0    0%    0
refresh_pattern .        0    20%    4320
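
Side note: the refresh_pattern entries above are ordinary case-insensitive regexes (that's what the -i flag means), so you can check what a pattern will match with grep before relying on it. The URL here is a made-up example:

```shell
# Same regex as the download.windowsupdate.com refresh_pattern, checked with grep -Ei
url='http://download.windowsupdate.com/msdownload/update/software/secu/x86.cab'
echo "$url" | grep -Eiq 'download.windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|asf|wm[v|a]|dat|zip)' \
  && echo "would be cached"
```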

9. Start Squid

service squid start

That’s it.




I made a post over at http://ubuntuforums.org/showthread.php?t=1609521&highlight=ventrilo, but here’s a startup script I put together for Ventrilo 3.0.

### BEGIN INIT INFO
# Provides:          ventrilo_srv
# Required-Start:    $network $remote_fs $syslog
# Required-Stop:     $network $remote_fs $syslog
# Should-Start:      $named
# Should-Stop:       $named
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Ventrilo version 3.0
### END INIT INFO

DESC="Ventrilo 3.0"
DAEMON_ARGS="-f/usr/local/ventrilo/ventrilo_srv -d"

do_start() {

        start-stop-daemon --quiet --start \
                --user $VENT_USER \
                --chuid $VENT_USER \
                --pidfile $PIDFILE \
                --exec $DAEMON -- $DAEMON_ARGS < /dev/null
        return $?
}


do_stop() {
        start-stop-daemon --stop --quiet \
                --retry=TERM/30/KILL/5 \
                --pidfile $PIDFILE \
                --name $NAME
        rm -f $PIDFILE
        return "$?"
}

case "$1" in


        sleep 10

        echo "Usage: $0 start|stop|restart|reload|force-reload"
        exit 1


I find Squid to be very useful and have been disappointed that 3.1 is still not in any repositories.  I googled a little to see if anyone had already done this, since reinventing the wheel is not really my thing.  There are a couple of tutorials/howtos, but I didn’t really like either approach.  One approach uses the Debian packages, which is fine, but even those are already out of date by a few revisions.  Another howto I came across had a broken startup script, which cost me about 15 minutes of headache before I just gave up on it.

So, I decided to install Ubuntu 10.04 Server on a VM and do this from scratch, from source.  This is a default installation with nothing more than bringing the system fully up to date and installing openssh-server.  I’m assuming you are logged in as a regular user and are in your home directory.

Off we go!

1.  First thing to do is install all the necessary dependencies:
sudo apt-get install build-essential libldap2-dev libpam0g-dev libdb-dev dpatch cdbs libsasl2-dev debhelper libcppunit-dev libkrb5-dev comerr-dev libcap2-dev libexpat1-dev libxml2-dev libssl-dev pkg-config dpkg-dev curl

2.  Get the file
wget http://www.squid-cache.org/Versions/v3/3.1/squid-3.1.8.tar.gz

3.  Create the log directories
sudo mkdir /var/log/squid3
sudo chown -R proxy:adm /var/log/squid3

4.  Create the cache directories and give them the correct permissions
sudo mkdir /var/cache/squid3
sudo chown -R proxy:proxy /var/cache/squid3

5. Build Squid 3.1
tar -xzf squid-3.1.8.tar.gz
cd squid-3.1.8

./configure --build=x86_64-linux-gnu --prefix=/usr --includedir=/usr/include --mandir=/usr/share/man --infodir=/usr/share/info --sysconfdir=/etc --localstatedir=/var --libexecdir=/usr/lib/squid3 --disable-maintainer-mode --disable-dependency-tracking --disable-silent-rules --srcdir=. --datadir=/usr/share/squid3 --sysconfdir=/etc/squid3 --mandir=/usr/share/man --with-cppunit-basedir=/usr --enable-inline --enable-async-io=8 --enable-ssl --enable-icmp --enable-useragent-log --enable-referer-log --enable-storeio=ufs,aufs,diskd --enable-removal-policies=lru,heap --enable-delay-pools --enable-cache-digests --enable-underscores --enable-icap-client --enable-follow-x-forwarded-for --enable-auth=basic,digest,ntlm,negotiate --enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,getpwnam,multi-domain-NTLM --enable-ntlm-auth-helpers=smb_lm --enable-digest-auth-helpers=ldap,password --enable-negotiate-auth-helpers=squid_kerb_auth --enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group --enable-arp-acl --enable-snmp --with-filedescriptors=65536 --with-large-files --with-default-user=proxy --enable-epoll --enable-linux-netfilter build_alias=x86_64-linux-gnu CFLAGS="-g -O2 -g -Wall -O2" LDFLAGS="-Wl,-Bsymbolic-functions" CPPFLAGS= CXXFLAGS="-g -O2 -g -Wall -O2" FFLAGS="-g -O2"

sudo make
sudo make install

I got the configure options from doing a squid -v with a repository install.  I had to change enable-ntlm-auth-helpers=SMB to enable-ntlm-auth-helpers=smb_lm.

6.  The startup script references squid3, but the binary is just called squid.  Fix that with a symlink.
sudo ln -s /usr/sbin/squid /usr/sbin/squid3

7.  Install the startup script to /etc/init.d/ and make it executable
wget https://www.guammie.com/donovan/files/2010/10/squid3
sudo mv squid3 /etc/init.d/

sudo chmod +x /etc/init.d/squid3

8.  Have Squid start on boot
sudo update-rc.d squid3 defaults

And… here’s a configuration file I’ve used.  Real basic, nothing fancy.

Just sudo /etc/init.d/squid3 restart and you should be good to go.

Here are the instructions in a text file in case any formatting is messed up.
