Downgrading a Debian package

After I updated a Debian Wheezy server to Debian Jessie, I was having some problems with VLC. I was using cvlc to capture and record video streams, but the capture would fail after 2 seconds. A colleague suggested that I leave the OS version alone but downgrade VLC, to confirm that the new version of VLC was causing the problem. A sound idea, but how do you downgrade a Debian package?

I found a variety of helpful sites and came up with the following:

  1. Add the source for the downgraded package to the apt config
  2. Specify the target release in the apt config
  3. Use “pinning” to tell apt to use the older versions for the package and its dependencies
  4. Use apt-get install to install the dependencies and the package

Details as follows:

Add the source for the downgraded package to the apt config
In this case, I needed to add the Wheezy package sources. I left my /etc/apt/sources.list as-is and added the following to /etc/apt/sources.list.d/vlc.list:

deb http://mirror.cc.columbia.edu/debian/ wheezy main non-free contrib
deb-src http://mirror.cc.columbia.edu/debian/ wheezy main non-free contrib

deb http://security.debian.org/ wheezy/updates main contrib non-free
deb-src http://security.debian.org/ wheezy/updates main contrib non-free

deb http://mirror.cc.columbia.edu/debian/ wheezy-updates main contrib non-free
deb-src http://mirror.cc.columbia.edu/debian/ wheezy-updates main contrib non-free

I used Columbia’s mirror because it’s fast and geographically nearer than most of the other mirrors. I’m sure not all of those sources were necessary (there are no security-related packages in VLC), but it didn’t hurt anything to include them.
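
After adding the new sources, update the package lists so that apt knows about the Wheezy versions:

apt-get update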

Specify the target release in the apt config
I added the following to /etc/apt/apt.conf.d/80targetrelease to specify that I want Jessie to be the target/default release:

APT::Default-Release "jessie";

Use “pinning” to tell apt to use the older versions for the package and its dependencies
I created the file /etc/apt/preferences.d/vlc containing the following. I added packages to the list as apt-get install failed due to missing dependencies. Several sites suggested a pin-priority of 1001, which should be used only when downgrading a package.

Package: vlc
Pin: release n=wheezy
Pin-Priority: 1001

Package: vlc-data
Pin: release n=wheezy
Pin-Priority: 1001

Package: vlc-nox
Pin: release n=wheezy
Pin-Priority: 1001

Package: libvlc5
Pin: release n=wheezy
Pin-Priority: 1001

Package: libvlccore5
Pin: release n=wheezy
Pin-Priority: 1001

Package: vlc-plugin-notify
Pin: release n=wheezy
Pin-Priority: 1001

Package: vlc-plugin-pulse
Pin: release n=wheezy
Pin-Priority: 1001

Package: xdg-utils
Pin: release n=wheezy
Pin-Priority: 1001

Use apt-get install to install the dependencies and the package
I started by trying apt-get install vlc, but it complained about missing dependencies. I added the dependencies to the preferences file as described in the section above, and then was able to install the following packages from Debian Wheezy:

apt-get install libvlccore5
apt-get install vlc-data
apt-get install libvlc5
apt-get install vlc-nox

Alternatively, I believe I could have skipped the preferences file and the pin-priorities entirely by specifying the target Debian release when running apt-get install:

apt-get install -t wheezy libvlccore5
apt-get install -t wheezy vlc-data
apt-get install -t wheezy libvlc5
apt-get install -t wheezy vlc-nox

However, I’m not sure that the desired package version would be preserved after running apt-get dist-upgrade.
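
One way to guard against that, which I did not test, would be to put the downgraded packages on hold so that apt leaves them at their current versions:

apt-mark hold libvlccore5 vlc-data libvlc5 vlc-nox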

References
The following sites were helpful to me while I was figuring out how to do this, and if you are interested in pin-priority and the different values to use in different scenarios, I definitely recommend the first link:

Applying per-directory X-Frame-Options headers in Apache

To help protect against clickjacking, I had applied the following to my Apache 2.2 configuration, based on the suggestions described in OWASP’s Clickjacking Defense Cheat Sheet and the Mozilla Developer Network’s The X-Frame-Options response header:

Header always append X-Frame-Options SAMEORIGIN

However, my site has certain pages that are included in an iframe on another site, for the purpose of displaying content on digital signage devices. After I added that header, those pages would no longer load in an iframe on the digital signage devices’ browsers.

I thought I might be able to change SAMEORIGIN to ALLOW-FROM and list both the URI of my site and the URI of the digital signage page. However, the HTTP Header Field X-Frame-Options RFC indicates:

Wildcards or lists to declare multiple domains in one ALLOW-FROM statement are not permitted

The pages I wanted to exempt from the X-Frame-Options restriction exist in their own directory, /digitalsignage, so I tried to override the X-Frame-Options header in a .htaccess file:

Header always append X-Frame-Options ALLOW-FROM http://example.com

That caused a 500 Server Error. This message appeared in the error logs:

.htaccess: error: envclause should be in the form env=envar

The Header directive must be malformed, but at the time I could not determine how. In hindsight, the likely problem is that the value contains a space and needs to be quoted ("ALLOW-FROM http://example.com"); unquoted, mod_headers treats http://example.com as a condition clause, hence the env=envar error. In any case, several sites have pointed out that some browsers (Chrome, Safari) do not support ALLOW-FROM.

I changed the .htaccess file back to SAMEORIGIN, to match what was in the main site configuration:

Header always append X-Frame-Options SAMEORIGIN

I then noted that the response header sent by the server included SAMEORIGIN twice:

X-Frame-Options: SAMEORIGIN, SAMEORIGIN

That’s the expected behavior when using append. It appeared only once after I changed append to set:
Header always set X-Frame-Options SAMEORIGIN

I tried using set instead of append with ALLOW-FROM:

Header always set X-Frame-Options ALLOW-FROM http://example.com

But it still produced the same 500 Server Error.

After reading the documentation for Apache’s mod_headers, I realized that unset would allow me to remove the X-Frame-Options header from the /digitalsignage directory:
Header always unset X-Frame-Options

That worked, and the pages were successfully included as iframes in a page on the digital signage company’s site.

Canvas enrollments.csv and add_sis_stickiness

I have an enrollments.csv file for Instructure’s Canvas LMS, and I want all of the enrollments in it to “stick”–that is, to survive a batch-mode SIS import. The enrollments are primarily course designers, who have no official standing in the class; they are not in our database, and therefore are not included in the regular enrollment updates.

According to the Canvas documentation for SIS imports:

add_sis_stickiness – Boolean

This option, if present, will process all changes as if they were UI changes. This means that “stickiness” will be added to changed fields. This option is only processed if ‘override_sis_stickiness’ is also provided.

Source: https://canvas.instructure.com/doc/api/sis_imports.html#method.sis_imports_api.create

However, experience tells me otherwise. An inquiry to Instructure’s support confirms that add_sis_stickiness does not apply to enrollments. Enrollments added this way will be deleted following the next enrollments batch import.

The choices to preserve these course designer enrollments are basically to add each one manually using the web UI, or add them via the API. Either option will make the enrollments “stick.”

I opted to use the API. Since I already had a formatted input file, I wrote a short Bash script (with the help of several man pages and a couple of Stack Overflow pages) that reads the CSV and processes each row, adding the enrollment via the API:

#!/bin/bash
headerrow=1
while IFS= read -r row; do
    if [ "$headerrow" -eq 0 ]
    then
        # get the SIS course ID
        cid="$(echo "$row" | cut -d',' -f1)"
        # get the SIS user ID
        uid="$(echo "$row" | cut -d',' -f2)"
        # get the role / enrollment type
        type="$(echo "$row" | cut -d',' -f3)"
        # reformat the enrollment type, e.g. "designer" becomes "DesignerEnrollment"
        tid="$(echo "$type" | cut -c1 | tr '[:lower:]' '[:upper:]')$(echo "$type" | cut -c2-)Enrollment"
        echo "course is $cid"
        echo "user is $uid"
        echo "type is $tid"
        result="$(curl "https://[yourcanvassite].instructure.com/api/v1/courses/sis_course_id:$cid/enrollments" -H 'Authorization: Bearer [REDACTED]' -X POST -F "enrollment[type]=$tid" -F "enrollment[user_id]=sis_user_id:$uid" -F 'enrollment[enrollment_state]=active' -F 'enrollment[notify]=false')"
        echo "$result"
    fi
    # skip only the first (header) row
    headerrow=0
done <enrollments.csv
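
For reference, the script assumes an input file whose first three columns are the course’s SIS ID, the user’s SIS ID, and the role, with a header row. A made-up example:

course_id,user_id,role
SPRING2016-HIST101,designer123,designer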

MySQL date_add and date_sub functions running against millions of rows

One of my servers runs a query once a week to remove all rows from a Syslog table (>20,000,000 rows) in a MySQL database that are older than 60 days. This was running terribly slowly and interfering with other tasks on the server.

Although the original query used a DELETE statement, I’ve used SELECT statements in the examples below.

SELECT COUNT(*)
FROM SystemEvents
WHERE ReceivedAt < DATE_SUB(NOW(), INTERVAL 60 DAY);

That selects about 900,000 rows and takes about 45 seconds.

SELECT COUNT(*)
FROM SystemEvents
WHERE ReceivedAt < DATE_ADD(CURRENT_DATE, INTERVAL -60 DAY);

Likewise takes about 48 seconds.

Is MySQL running a function every time it makes a comparison? I decided to try using a hard-coded date to find out:

SELECT COUNT(*)
FROM SystemEvents
WHERE ReceivedAt < '2015-11-12 12:00:00';

6 seconds! Much faster.

I created a user-defined variable:
SET @sixty_days_ago = DATE_SUB(NOW(), INTERVAL 60 DAY);

Then ran the query:
SELECT COUNT(*)
FROM SystemEvents
WHERE ReceivedAt < @sixty_days_ago;

12 seconds. Not 6 seconds, but still a fraction of the original time!
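
To put this back into the weekly cleanup job, the whole thing can be scripted. Here is a minimal sketch, assuming the table lives in a database named Syslog (an assumption; substitute your own) and that MySQL credentials come from ~/.my.cnf:

#!/bin/sh
# Weekly purge: compute the cutoff once, then DELETE using a
# constant comparison. "Syslog" is an assumed database name.
mysql Syslog <<'SQL'
SET @sixty_days_ago = DATE_SUB(NOW(), INTERVAL 60 DAY);
DELETE FROM SystemEvents WHERE ReceivedAt < @sixty_days_ago;
SQL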

Yammer Enterprise Block Users appears to be limited to 70 users per submission

This is just based on trial-and-error testing, but when I submit over 70 e-mail addresses at:

https://www.yammer.com/[my domain]/admin/blocked_email_addresses

I get one of the following errors:

  • Nginx 502 Bad Gateway
  • We’re sorry, but something went wrong.
    We’ve been notified about this issue and we’ll take a look at it shortly.

When I submit 70 addresses or fewer, it works as expected.

I have not yet discovered a way to block addresses via PowerShell.

Error retrieving Glacier vault inventory via AWS CLI: Unknown options: inventory-retrieval}’

Amazon’s Glacier service is great, but low-cost storage has other costs. For example, you might have a vault but not know what archives it contains. You can retrieve the vault inventory–essentially listing the contents of a directory–but like any Glacier retrieval, it may take several hours.

Working from Downloading Vault Inventory using the REST API, I put this together:
C:\>aws glacier initiate-job --account-id - --vault-name my_poorly_named_vault --job-parameters '{"Type": "inventory-retrieval"}'

Which returned:

Unknown options: inventory-retrieval}'

I had taken the command directly from AWS’s example:
http://docs.aws.amazon.com/cli/latest/reference/glacier/initiate-job.html.

(Keep in mind that I had already followed the steps at Installing the Amazon Web Services Command Line Interface and Configuring the Amazon Web Services Command Line Interface.)

According to the documentation for the job-parameters option (http://docs.aws.amazon.com/sdkforruby/api/Aws/Glacier/Types/JobParameters.html#type-instance_method),
valid values are “archive-retrieval” and “inventory-retrieval”.

But the error message says inventory-retrieval}’. Why is it picking up the trailing curly brace and the apostrophe? The likely explanation: unlike Unix shells, Windows cmd.exe does not treat single quotes as quoting characters, so the space inside the JSON splits the argument in two.
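
If that is the cause, escaping the inner double quotes should also work in cmd.exe (I have not gone back to verify this):

C:\>aws glacier initiate-job --account-id - --vault-name my_poorly_named_vault --job-parameters "{\"Type\": \"inventory-retrieval\"}"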

Instead, I formatted the job-parameters JSON in a file named aws-json.txt, with the curly braces on separate lines:

{
    "Type": "inventory-retrieval"
}

I tried this variation on the initiate-job command:
C:\>aws glacier initiate-job --account-id - --vault-name my_poorly_named_vault --job-parameters file://aws-json.txt

That worked!

The results returned:

{
    "jobId": "y8ugyoNzzusaf6Lv72G3hsjAA6O7nw5bJQ2u6J9TDnJ82_qx-lxnqrhSxIcGvOU1iiXoUhZboiojxsDu8gLQOfiJ7hR2",
    "location": "/123456789011/vaults/my_poorly_named_vault/jobs/y8ugyoNzzusaf6Lv72G3hsjAA6O7nw5bJQ2u6J9TDnJ82_qx-lxnqrhSxIcGvOU1iiXoUhZboiojxsDu8gLQOfiJ7hR2"
}
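
Because inventory retrieval can take several hours, the follow-up steps are to poll the job and, once describe-job reports it complete, download the output. A sketch using the job ID returned above:

C:\>aws glacier describe-job --account-id - --vault-name my_poorly_named_vault --job-id y8ugyoNzzusaf6Lv72G3hsjAA6O7nw5bJQ2u6J9TDnJ82_qx-lxnqrhSxIcGvOU1iiXoUhZboiojxsDu8gLQOfiJ7hR2

C:\>aws glacier get-job-output --account-id - --vault-name my_poorly_named_vault --job-id y8ugyoNzzusaf6Lv72G3hsjAA6O7nw5bJQ2u6J9TDnJ82_qx-lxnqrhSxIcGvOU1iiXoUhZboiojxsDu8gLQOfiJ7hR2 inventory.json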

ORA-00904: “WM_CONCAT”: invalid identifier

I’ve never run into wm_concat before when using Oracle, but I saw it in a statement similar to this earlier today:

SELECT WM_CONCAT(id)
FROM chris.item
WHERE item_type LIKE '%floppy%';

Of course, it didn’t work:

ORA-00904: "WM_CONCAT": invalid identifier

It looks like WM_CONCAT was never a supported function (see Error when using wm_concat function), and the code I was debugging likely stopped working when the database was upgraded to 12c, which removed it.

WM_CONCAT is intended to return the results as a comma-separated list. A strange idea, but in this particular case it almost made sense. A supported replacement that aggregates results into a single string is LISTAGG.

SELECT LISTAGG(id, ',')
WITHIN GROUP (ORDER BY id)
FROM chris.item
WHERE item_type LIKE '%floppy%'
GROUP BY 1;

In this case, I did not want the results separated into groups, so I used GROUP BY 1 to put all results in the same group. (Omitting the GROUP BY clause entirely would accomplish the same thing, since the only selected expression is an aggregate.)

There are a number of other string aggregation techniques for Oracle, if you ever find yourself with such a need.

Connecting to Oracle instance in AWS RDS

I inherited a development Oracle instance in Amazon Web Services’s Relational Database Service (AWS RDS) from a former colleague. I wanted to know what was in the database, and for that, I had to connect to it. I decided the most straightforward way would be with the simplest tool: SQL*Plus.

Holding messages in the Postfix mail queue

Earlier today, someone sent a large number of email messages, each containing a 30-megabyte attachment, to users on our servers. This put our Postfix servers under a heavy load and caused some messages to be delivered after a substantial delay. (This was in part due to additional processing done by our servers; I’m sure a plain-jane Postfix instance could have handled it without an issue.)

This was no good. The sender–let’s call it bigbulk.test.com–should be able to send such messages, but not at the expense of normal mail delivery. I needed to deprioritize those messages so that other mail could be delivered first.

The first thing I did was to hold all the mail from bigbulk.test.com:

  • Retrieve the mail queue
  • Select only the lines containing bigbulk.test.com
  • Select only the queue ID, the first item listed in each result
  • Pass the queue IDs to the postsuper -h command

mailq | grep bigbulk.test.com | cut -d ' ' -f 1 | xargs -n1 postsuper -h

But what about delivering them? I released them in small batches so as not to overload the server again.

  • Retrieve the mail queue
  • Select only the lines containing bigbulk.test.com
  • Select only the queue ID (stripping out the hold-indicator)
  • Select only the first 5 results
  • Pass the queue IDs to the postsuper -H command

mailq | grep bigbulk.test.com | cut -d '!' -f 1 | head -n5 | xargs -n1 postsuper -H
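
To avoid babysitting the queue, the batching could be scripted. An untested sketch: release five held messages every five minutes until none remain (mailq marks held messages with a trailing ! on the queue ID):

while mailq | grep 'bigbulk\.test\.com' | grep -q '!'; do
    mailq | grep 'bigbulk\.test\.com' | grep '!' | cut -d '!' -f 1 | head -n5 | xargs -n1 postsuper -H
    sleep 300
done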

Password Form Usability: Duke Energy

When you’re signing up for any online service, picking a password is always trouble. What weird password requirements does this service have? Tonight I had that question with Duke Energy’s sign-up form. Fortunately, they had a Help icon that described the password rules:

Duke Energy Sign-up Form: password rules
Text reads: Password are case-sensitive and must contain at least eight letters or numbers.

OK, minimum of 8 alphanumeric characters. Easy enough.

I use Password Safe as my password manager. I have no idea what any of my passwords are: they are auto-generated random strings. I generated a new random password and entered it into the form:

Duke Energy Form: password feedback mixed messages
Text reads: Must be at least eight characters, contain one letter and one number and no special characters.

OK, so the password rules are a little different from what was initially described. But which symbols are special characters? And this screen is sending mixed messages: if the password strength is rated Strong, why is it not valid?

I updated the password generation rules in Password Safe to generate a new password, assuming that only alphanumeric characters are allowed:

  • Use lowercase letters, minimum 1
  • Use uppercase letters, no minimum
  • Use numbers, minimum 1
  • No symbols

And, since no symbols are included, I increased the password length to 16 characters. This new password was accepted, but the feedback indicates that the password is only moderately strong!

Duke Energy form: a valid password, rated moderately strong
The password is valid, but rated only moderately strong
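
As an aside, a roughly equivalent password can be generated at a shell prompt. This sketch mirrors the Password Safe settings above (16 alphanumeric characters, retrying until there is at least one lowercase letter and one digit):

# 16 alphanumeric characters; retry until the minimums
# (one lowercase letter, one digit) are satisfied
until pw=$(tr -dc 'a-zA-Z0-9' </dev/urandom | head -c16) &&
      echo "$pw" | grep -q '[a-z]' &&
      echo "$pw" | grep -q '[0-9]'; do :; done
echo "$pw"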

I’m guessing that the code that generates the password strength indicator is from a 3rd-party and has no knowledge of Duke Energy’s password rules.

My problems with this, from a usability perspective:

  1. The password rules should be apparent and described accurately.
  2. The password strength indicator should be aware of any password rules, and should describe a rejected password as such.

From a security perspective, I don’t see why any keyboard characters should be restricted. More characters to choose from means more complexity. Plus, if there were no character restrictions, it would be easier to describe the rules–and use an accurate 3rd-party password-strength tool.