Saturday, August 30, 2014

The Business of Moving People

Not infrequently, I am asked about my academic specialty. When I reply "operations research", as I usually do, I'm often met with a polite but blank stare. This happened to me a couple of times at a recent party. If the questioner inquires further, I'll try to give an example or two of what OR folks work on in the real world, while omitting any references to "integer programming models" or "Markovian decision processes".

Two recent articles in the popular press do a nice job of illustrating real-world problems to which operations research models and techniques are applied. Coincidentally, they both have to do with getting people from point A to point B, hence the title of the blog post. I thought I would link them here.

The first is an article at CityLab about bike sharing programs: "Balancing Bike-Share Stations Has Become a Serious Scientific Endeavor". Bicycle sharing services are a relatively recent innovation, found primarily in urban environments. By allowing individuals to borrow or rent a bicycle (owned by the service) for commutes or errands, the services help drive down traffic congestion and air pollution, while presumably turning a profit (at least if privately owned). As with car rental services, bicycle sharing services allow you to pick up a bicycle at one location (station) and return it at another, a necessity if the bicycles are going to be used for commuting. (You do not want the bicycle out of service for eight or nine hours while the renter is at work, and the renter does not want to pay for an entire day's use of the bicycle.) Also in common with car rental agencies, this can result in some locations developing a surplus of available bicycles while others are starved. The CityLab article focuses on the twin problems of predicting where the imbalances will be and efficiently routing the vehicles used to rebalance the stations (moving bicycles from overpopulated stations to underpopulated stations). Neither "analytics" nor "operations research" is mentioned in the article, but that is clearly what is going on.
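
The CityLab piece wisely stays away from the mathematics, but the rebalancing problem has a classical core. As a deliberately oversimplified sketch (my own illustration, ignoring truck routing and the time dimension), suppose station $i$ currently holds a surplus of $s_i$ bicycles, station $j$ is short $d_j$ bicycles, and moving one bicycle from $i$ to $j$ costs $c_{ij}$. Choosing shipment quantities $x_{ij}$ is then just the venerable transportation problem: \[ \begin{array}{lrcll} \textrm{minimize} & \sum_{i,j} c_{ij}x_{ij}\\ \textrm{s. t.} & \sum_{j} x_{ij} & \le & s_{i} & \forall i\\ & \sum_{i} x_{ij} & \ge & d_{j} & \forall j\\ & x & \ge & 0 \end{array} \] (assuming total surplus covers total shortage). The real problem is much messier -- demand is uncertain, the trucks have capacities and routes, and everything shifts by the hour -- which is what makes it "a serious scientific endeavor".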

The second article, appearing in the New Republic, has the rather amusing title "Why the $#&@! Did Your Airline Cancel Your Flight Today? They Had a Very Good Reason." The author, Amy Cohn, is an industrial and operations engineering professor at Unmentionable University in Ann Arbor. She gives a very persuasive argument, one I think lay people can follow, that airline flight cancellations are motivated not by concerns that the flight is underbooked and therefore unprofitable, but rather by a complex planning system (more operations research) reacting to problems and trying to find a solution that does as little damage as possible to the overall schedule, in the process hopefully inconveniencing as few passengers as possible. (Yes, I realize I just said something positive about someone from Ann Arbor. As soon as I post this, I will go do some act of charity in contrition.)

Of the two articles, I think I will be more likely to cite the first one when someone asks just what operations researchers do -- not because the second article is less compelling (or because its author has an unfortunate work address), but because associating "operations research" and "flight cancellations" may not have a positive impact on my popularity.

(Hat tip to @or_exchange, @HarlanH, @portlypoor and @Neil_Irwin for tweets leading me to those two articles.)

Update: Another tweet led me to a blog post by Jean-Francois Puget titled "Optimizing Real Time Decision Making". It uses a taxi service to illustrate how OR tools (optimization) can actually improve aggregate system response time for customers by intentionally delaying responses to some service requests (another people-moving example). Elevator dispatch algorithms provide a similar but not identical application (but good luck explaining to someone that there is actually a reason for them to wait for a car while another car sits idle).

(Hat tip to @dualnoise and @menkris for the tweet that led me to J-F's post.)

Thursday, August 28, 2014

A Musical Random Walk

I just read a blog post here at MSU about The Infinite Jukebox, a web tool that will analyze a song (either one you upload or one from their library) and map out links between segments of the song that have approximately the same beat. You then see a visualization of the song as a loop, with the links (arcs) added, forming a directed graph. (The visualization does not display arcs as arrows, though, just as, well, arcs.)

When you replay the song in The Infinite Jukebox, as it passes the tail node of each arc, it randomly chooses whether to keep going along the original path or to cross the arc and continue playing from the head node. Thus the "infinite" part of the site's name -- the song will continue until you stop it.

I don't know the algorithm they use for playback, but it appears superficially to be a Markov process, so perhaps this really is a random walk over the song's graph.
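
To make the Markov interpretation concrete, here is a minimal Python sketch of playback as I imagine it works (my guess at the mechanics, not the site's actual algorithm -- the song graph and jump probability are made up):

import random

# Hypothetical song graph: beats 0..9 in order, plus "similar beat" arcs.
# links[i] lists the beats reachable from beat i other than beat i + 1.
links = {3: [7], 8: [2], 9: [0]}
n_beats = 10
p_jump = 0.3  # probability of crossing an arc when one is available

def playback(max_steps=50):
    """Random walk over the song graph: at each beat, either advance to
    the next beat or jump across an outgoing arc to its head node."""
    beat = 0
    for _ in range(max_steps):
        yield beat
        arcs = links.get(beat, [])
        if arcs and random.random() < p_jump:
            beat = random.choice(arcs)    # cross an arc
        else:
            beat = (beat + 1) % n_beats   # continue along the original path

print(list(playback()))  # the real thing plays until you stop it

Since the choice of the next beat depends only on the current beat, this version at least really is a Markov chain, i.e., a random walk over the song's graph.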

If you want to try it out, I can recommend The Trashmen's version of "Surfin' Bird", which I was happy to discover is in the site's library. The randomly looped version sounds suspiciously like the original (which is what I expected). Although the visualization of the progress through the song is interesting, for the full multimedia experience you'll want to have the YouTube video playing (muted) on a second screen.

Sadly, their library does not include "Papa-Oom-Mow-Mow" by The Rivingtons, and I don't happen to own a copy, so I could not compare the action of The Infinite Jukebox on that with what it did to "Surfin' Bird". As I listened to the looped "Surfin' Bird", though, I couldn't help but think "and thus another enhanced interrogation technique is born".

Friday, August 15, 2014

Scheduling Instability

Fellow OR blogger Laura McLay recently wrote a post "in defense of model simplicity", which is definitely worth the read. It contains a slew of links to related material. As I read it, though, my contrarian nature had me thinking "yes ... as long as the model is not too simple".

A recent piece in the NY Times ("Working Anything but 9 to 5") had me thinking again about model simplicity. The gist of the Times article is that irregular (erratic, chaotic) work schedules for hourly employees, coupled with short notice of those schedules, create stress for some families, particularly single-parent families where daycare needs to be arranged. Fairly deep into the article, I found someone saying what I was thinking:
Legislators and activists are now promoting proposals and laws to mitigate the scheduling problems. But those who manufacture and study scheduling software, including Mr. DeWitt of Kronos, advocate a more direct solution: for employers and managers to use the software to build in schedules with more accommodating core hours.

“The same technology could be used to create more stability and predictability,” said Zeynep Ton, a professor at M.I.T. who studies retail operations.
The multiperiod nature of scheduling models tends to make them a bit chewy, especially when you need to coordinate schedules of multiple individuals (or machines, or venues) across blocks of time while dealing with multiple constraints and possibly multiple conflicting criteria. As with most discrete optimization problems, simplicity of the model tends to pay dividends in reduced execution time. That said, models that take into account the cumulative schedules of workers do exist. In the U.S., the Federal Aviation Administration has rules limiting consecutive hours in the cockpit and rest times for pilots flying passenger routes. (If you think this is another example of the "nanny state", have a look at this article about an Indian airliner imitating a roller coaster.)
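
To give the flavor (a generic sketch of my own, not the FAA's actual rule book): if binary variable $y_{it}$ indicates whether worker $i$ is on duty in period $t$, limiting duty to at most $H$ consecutive periods amounts to requiring at least one off period in every window of $H+1$ periods: \[ \sum_{s=t}^{t+H} y_{is} \le H \quad \forall i,t. \] Constraints like this add rows (and, for rest-time rules, often auxiliary variables), but they are bread and butter for scheduling models.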

There are at least four dimensions in play that may be contributing to the high variance in schedules for some employees:
  1. Is the employer concerned enough about scheduling chaos to make it a policy to mitigate variance in schedules of individual employees?
  2. If so, is the policy being implemented?
  3. How does the employer assess tradeoffs between smoothing of schedules and other criteria (adequate staffing, staffing cost, matching staff skills with needs, ...)?
  4. Can/does the scheduling software (assuming software is used) implement variance-mitigation either as constraints or in the objective function?
The second dimension, in my opinion, is one best left to the management folks (my erstwhile colleagues). The fourth dimension clearly belongs to the OR crowd: we need to make provision for those sorts of constraints/criteria, even at the risk of making models more complicated (cough Laura cough), and we need to communicate clearly to the users that those provisions exist (and how to use them). I'm inclined to think that the first and third dimensions straddle the disciplines of OR (or, if you prefer, analytics) and management. The management folks can provide evidence-based arguments of how more worker-friendly schedules can improve employee performance and retention, while those of us in OR can perhaps help assess the trade-offs and quantify costs and benefits.
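
As for how the fourth dimension might look inside a model, one standard device (a sketch, not any particular vendor's feature) is to penalize deviations from a reference schedule. If $\hat{y}_{it}$ is worker $i$'s published (or habitual) schedule for period $t$ and $y_{it}$ is the new schedule, the objective becomes \[ \textrm{minimize}\quad \textrm{(staffing cost)} + \lambda \sum_{i,t} \left| y_{it} - \hat{y}_{it} \right|, \] with the absolute values linearized via auxiliary variables in the usual way. The weight $\lambda$ is precisely the dial management would use to express the trade-off in my third dimension.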

[Update: Starbucks, at least, appears to be implementing policy changes. Thanks to Tallys Yunes for this link: "Starbucks to Change Scheduling as Some Employees Struggle".]

I'll conclude with a wonderful anecdote, told to me long ago by a colleague from our statistics department, about model simplicity. He attended a colloquium by what would now be called a biomathematician (I don't think the category existed back then). The presenter had developed a model for arterial blood flow in the human body, using a system of nonlinear differential equations. The right-hand sides of the equations represented a forcing function. The system was too complex for the presenter to solve, so he simplified things by solving the homogeneous case (setting the right sides of the equations to zero). At the conclusion of the talk, after the usual round of applause, the presenter asked if there were any questions. He got at least one: "If the forcing terms are all zero, doesn't that mean the patient is dead?" It does -- the simpler model that he had solved required that the body's heart not beat.
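
In symbols (a generic sketch -- I never saw the model in question): the system was something like \[ \dot{x}(t) = F(x(t)) + g(t), \] with the forcing term $g(t)$ representing, in effect, the pumping of the heart. Solving the homogeneous case means setting $g(t) \equiv 0$: mathematically convenient, physiologically fatal.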

Simplicity in models is good, but arguably not worth dying for.

Thursday, August 14, 2014

Mythbuntu: The Upgrade from Hell

I foolishly let Mythbuntu update to version 14.04 overnight a few days ago. The installer ran into problems (which I could not decipher) regarding updating the MythTV database. I let it upload two separate bug reports and did my best to soldier on. When the installation was finally over, the back end would not load, which means it was doing a pretty good impression of a (software) brick.

I got a message telling me that the database schema was several versions out of date, and instructing me to run either mythtv-setup or mythbackend in order to update the database. I tried both (from a console) and both failed (with mythtv-setup trying to start mythbackend). Eventually, I discovered the first (of multiple) problems.

Problem 1: Time Zone Data


Somewhere along the line I got a message that MySQL, the database program used by MythTV, was lacking time zone data. I followed some instructions I found, and fixed it by running the command

mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -uroot -p mysql

supplying a password when prompted. After that, I was able to run mythtv-setup and mythfilldatabase. Apparently mythtv-setup automatically updated/fixed the database.

Problem 2: Security PIN


In mythtv-setup (general settings) I discovered that the security PIN was blank. According to the prompt, that will prevent any front end from connecting to the back end. Per the prompt, I changed it to 0000, allowing all front ends to connect. (I only have one, on the same machine as the back end, so that's fine.)

The IP address for the back end was correct, but since I have no remote front ends, I changed both the local and master addresses to 127.0.0.1. That was probably unnecessary, but I mention it just in case it had any unintended effect. [Update: In hindsight, I realized that might interfere with my using MythWeb to program recording schedules from other devices on my home LAN, so I switched both the local and master addresses back to the machine's static unrouteable IP on the local LAN, with no harm done.]

Problem 3: No Back End Service


Even after successfully running the setup program, the front end still could not connect to the back end ... because the back end was not running ... because it failed silently every time it was started. The good news was that I could start the back end manually (running mythbackend in a console), after which the front end could connect. It came as quite a relief to see that recordings, channel information and recording schedules had all survived the upgrade. Still, you can't do scheduled recordings very effectively if someone has to log in and start the back end manually each time.

Fortunately, I was able to get help from Thomas Mashos in the Mythbuntu Community on Google+. I could start the back end manually because my personal MythTV configuration file (~/.mythtv/config.xml) survived the upgrade intact. The system configuration file (/etc/mythtv/config.xml, also symlinked to /home/mythtv/.mythtv/config.xml) must have been replaced by what amounted to a placeholder. The host, user name and database name fields were empty; the password field was filled in incorrectly (or maybe that was the correct password for a null user accessing a null database on a null system?). I edited those four entries to match my config file and (woo-hoo!) the back end resumed playing nicely.

Problem 4: No Sound


Once the back end was running, I played back part of a saved recording to verify that things were working okay. The good news was that the video played fine. The bad news was that there was no audio. This turned out not to be limited to MythTV: I couldn't get the system to make sounds, period. (In contrast, the system had no trouble getting me to make sounds. Very, very aggravated sounds.)

I searched hither and yon on Google, and discovered that a nontrivial number of users have (or had) sound problems, many of them as a result of the upgrade to Mythbuntu 14.04. I've lost count of the number of options I tried, including purging the alsa-base, linux-sound-base, pulseaudio and pulseaudio-module-x11 packages (all at the same time), reinstalling them (I even deleted the archived copies and forced the package manager to download them again), and of course rebooting.

Eventually I stumbled on a fix (woo-hoo!). I had previously poked around in the ALSA mixer (by running alsamixer in a terminal) and unmuted the obvious suspects. I gave it one more try, disabling Auto-Mute Mode (because the last thing I want right now is something muting the sound again) and playing with settings I didn't understand. It turns out that unmuting both "S/PDIF" and "S/PDIF Default PCM" restored the sound. I have no idea what they stand for, let alone why they were muted. From aplay -L, I can see that they're associated with my NVidia graphics card (which also handles sound). So are eight bazillion other things, and I'm not sure why this one is special.

Problem 4.5: No Sound After Reboot


One might hope that unmuting the two settings above would be a permanent fix. Not quite. After a reboot, "S/PDIF Default PCM" remains unmuted but <sigh>"S/PDIF" finds itself muted again</sigh>.

In alsamixer (in a terminal), I unmuted both, then ran sudo alsactl store to store the correct settings and added alsactl restore to /etc/rc.local, which should hypothetically be run each time the system boots (but not, I believe, during reboots). The command works fine from a terminal, but after a full shutdown/start I find S/PDIF once again clobbered; so either /etc/rc.local is not running or, more likely, it restores the correct sound settings before the evil program that's muting S/PDIF gets executed.

Just for grins, after a reboot (and without restoring the ALSA settings, i.e., with a silent PC), I went into MythTV's audio setup menu. It showed a sizable number of options that all mentioned the correct NVidia card and matched what I saw when I ran aplay -L. The default choice produced no sound. I switched to a different setting (from "dmix" to "front", as I recall), ran the sound test and it worked. Unfortunately, while the setting survived a reboot, sound did not. So I tried again, switching to "ALSA:iec958:CARD=NVidia,DEV=0", and that setting works. It has survived three restarts, so I'm cautiously optimistic that my long ordeal (most of two days chewed up fixing this) is finally over -- pending the next upgrade, of course.

Problem 5: No MythWeb [Update]


After posting this, I discovered that MythWeb was not running. This turns out to be a known bug, due to a difference in the Apache versions used by Mythbuntu 14.04 and its predecessor. The fix specified in the bug report, which I repeat here, worked and MythWeb now runs as expected.
$ sudo rm /etc/apache2/
$ sudo dpkg-reconfigure mythweb
$ sudo /etc/init.d/apache2 start


Thursday, August 7, 2014

Fewer Zeros

A question I saw online not too long ago caused me a flashback to my days teaching linear programming (LP) to masters students. The poster had developed an optimization model -- I can't recall if it was an LP, a quadratic program (QP) or a mixed-integer program (MIP) -- and had no problem solving it. He was, however, unhappy with the number of decision variables taking value zero in the optimal solution, and was quite insistent on being shown how to get another (optimal?) solution containing fewer zeros.

My first reaction took me back even further, to my high school years. If the optimal solution has a bunch of zeros, and it's the unique optimal solution, then that's that. My second recollection was my lecturing on post-optimal sensitivity analysis, and the wonders of multiple optima (which, in the case of LP, you can detect by examining the reduced costs of the variables that did not make it into the final simplex basis). Most people, understandably, are happy to yank the final solution off the printer (or dump it to a file, or whatever) and run with it. I gave two rationales for poking it a bit to see if there were other optima, or perhaps near-optima. Both rationales were illustrated by product mix models (deciding what products to manufacture, in what quantities, given resource constraints, unit profit margins, etc.).
  1. Not all "optimal" solutions are created equal. Unless you are dabbling in multiobjective optimization, the model optimizes a single criterion function. There may be other criteria that can be used to distinguish between multiple optimal (or near-optimal) solutions. For instance, does the solution you found require a massive revamping of the production schedule, while another optimal solution lurks that is a minor tweak of the current schedule? Does the optimal solution found end production of the product your boss developed or championed back when, the one that got her promoted to her current position, while another optimal solution continues to produce it?
  2. Does your optimal solution kill off some products and, if so, are there other optimal/near-optimal solutions that continue to make them? Zeroing out production of PCs in favor of laptops and tablets may make sense given the current market (as reflected in the current values of demand and revenue parameters in your model), but the marketing weenies may tell you that, should PCs suddenly become hot, you will have a harder time climbing back into the market if you have not retained at least some presence. You may also lose some design or manufacturing competence for a particular product type if you stop making it entirely.
The second rationale is consistent with the original poster's desire for fewer zeros. I'm pretty sure that, given enough time (and enough beers), I could come up with similar rationales in contexts unrelated to production scheduling.

So how does one reduce the number of zeros in the solution? I'm assuming, for simplicity, that you would ideally like none of the variables to be zero; it's easy to tweak what follows to reflect a desire to make some variables positive combined with an indifference to the fate of other variables.

The first question you need to ask yourself is what you are willing to give up in trade, because it's entirely possible you are staring at the unique optimal solution (or that other optima don't do much better as far as the number of zeros is concerned). Let's say that your original problem was to maximize $f(x)$ subject to $x\in X$, where $X$ is the original feasible region (defined by various constraints that I don't need to write down here). Let's say further that $x^*\in\Re^n$ is the optimal solution and $f^*=f(x^*)$ is its objective value. You decide that you are willing to sacrifice up to $\epsilon \ge 0$ from the objective value to get a solution with fewer zeros, or maybe up to $\epsilon$ for each variable whose value switches from zero to positive, or something along those lines. (If $\epsilon = 0$, you'd better hope there is an alternate optimum.)

The second question you need to ask yourself is, for each variable $x_i$, what is the minimum value ($L_i > 0$) that you would be willing to accept. Producing 50 truckloads of corn flakes per week combined with one box of wheat flakes is not a whole lot different from just making corn flakes.

The methods discussed below do not exhaust all possibilities; they're just what come to mind at the moment.

Method 1: Utopia


We can add the constraint $f(x)\ge f^*-\epsilon$ and pick a new objective function designed to push $x$ toward having fewer zeros. One possibility is to aim for a "utopia point", an idealized solution you likely will never reach. So consider the following problem (which remains an LP if $f()$ is linear, and is convex -- quadratically constrained -- if $f()$ is a concave quadratic; anything else and you're on your own).
\[ \begin{array}{lrcl} \textrm{minimize} & \sum_{i=1}^{n} & w_{i}(y_{i}+z_{i})\\ \textrm{s. t.} & x & \in & X\subset\Re^{n}\\ & f(x) & \ge & f^{*}-\epsilon\\ & x_i - y_i + z_i & = & u_i \ (i\in 1,\dots,n) \\ & y,z & \ge & 0 \\ \end{array} \] The utopia point $u\in \Re^n$ is a vector of ideal values (presumably positive) for the $x$ variables; the weights $w_i > 0$ reflect your priorities for getting close to the utopian value $u_i$ for each $x_i$. (Auxiliary variables $y$ and $z$ are used to capture the absolute difference between $x$ and $u$.) The objective minimizes $\left\Vert x - u \right\Vert_1$, the $L_1$ norm of the difference (weighted by $w$); you can use a different norm ($L_\infty$ stays linear, $L_2$ results in a quadratic program) if you wish.

Method 2: Bounds


A second, perhaps simpler, approach is to assert a strictly positive lower bound $L_i > 0$ on every variable $x_i$.
\[ \begin{array}{lrcl} \textrm{maximize} & f(x)\\ \textrm{s.t.} & x & \in & X\subset\Re^{n}\\ & x & \ge & L\\ \end{array} \]There is a danger that this could make the problem infeasible (if you are too aggressive in the choice of $L$). Assessing the trade-off between objective value ($f$) and bounds ($L$) is largely a matter of trial and error. Dual variable values (LP) or KKT multipliers (QP) may allow you to incrementally adjust the bounds until a satisfactory outcome is achieved.

Method 3: Binary Variables


Yet another possibility is to introduce binary variables reflecting which original variables are "turned on", and then optimize the selection of those variables. Let $z_i\in\{0,1\}$ take the value 1 if $x_i$ is "sufficiently" positive and 0 otherwise. Suppose that we also have weights $w_i > 0$ reflecting the perceived importance of "turning on" each variable $x_i$. (If you just want as many nonzeros as possible, set $w_i = 1$ for all $i$.) Consider the following model:\[ \begin{array}{lrcl} \textrm{maximize} & \sum_{i=1}^{n} & w_{i}z_{i}\\ \textrm{s. t.} & x & \in & X\subset\Re^{n}\\ & f(x) & \ge & f^{*}-\epsilon\\ & x_i  & \ge & L_i z_i \ (i\in 1,\dots,n) \\ & z & \in & \{0,1\}^n \\ \end{array} \]We maximize the aggregate value of the choice of $x$ variables taking (sufficiently) positive value, while maintaining feasibility and taking an acceptable hit in the value of the original objective function. The computational cost is that we have converted our original problem, which might have been a relatively easy to solve LP or QP, into either a mixed-integer linear program or a mixed-integer quadratically constrained program, either of which will be harder to solve.
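
To make Method 3 concrete, here is a small sketch in Python using the open-source PuLP modeling package (my choice for illustration), with a made-up three-product product-mix instance standing in for $X$ and $f$ (all data hypothetical):

from pulp import LpProblem, LpVariable, LpMaximize, lpSum, value

# Hypothetical product-mix data standing in for f and X.
profit = [5.0, 4.0, 3.0]   # unit profit margins (defines f)
usage = [2.0, 3.0, 1.0]    # resource consumed per unit (defines X)
capacity = 10.0            # available resource
L = [0.5, 0.5, 0.5]        # minimum worthwhile production levels
w = [1.0, 1.0, 1.0]        # unit weights: just count the nonzeros
eps = 3.0                  # objective value we are willing to sacrifice
n = len(profit)

# Stage 1: solve the original problem to find f*.
orig = LpProblem("original", LpMaximize)
x = [LpVariable(f"x{i}", lowBound=0) for i in range(n)]
orig += lpSum(profit[i] * x[i] for i in range(n))             # objective f
orig += lpSum(usage[i] * x[i] for i in range(n)) <= capacity  # x in X
orig.solve()
f_star = value(orig.objective)

# Stage 2 (Method 3): maximize the weighted number of variables
# "turned on", sacrificing at most eps of the original objective.
m = LpProblem("method3", LpMaximize)
x = [LpVariable(f"x{i}", lowBound=0) for i in range(n)]
z = [LpVariable(f"z{i}", cat="Binary") for i in range(n)]
m += lpSum(w[i] * z[i] for i in range(n))                     # new objective
m += lpSum(usage[i] * x[i] for i in range(n)) <= capacity     # x in X
m += lpSum(profit[i] * x[i] for i in range(n)) >= f_star - eps
for i in range(n):
    m += x[i] >= L[i] * z[i]   # z_i = 1 forces x_i >= L_i
m.solve()
print([v.value() for v in x], [v.value() for v in z])

With these (made-up) numbers, the original LP produces only the third product; the second stage keeps all three products in the mix at a cost of exactly $\epsilon$ in profit. The same two-stage pattern adapts directly to Method 4 below.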

Method 4: Binary Variables Again


A variation of the previous approach is to maximize $f$ subject to a restriction on the number of zeros in the solution:\[ \begin{array}{lrcl} \textrm{maximize} & f(x)\\ \textrm{s. t.} & x & \in & X\subset\Re^{n}\\ & x_i  & \ge & L_i z_i \ (i\in 1,\dots,n) \\ & \sum_{i=1}^{n} w_{i}z_{i} & \ge & K \\ & z & \in & \{0,1\}^n \\ \end{array} \]where the weights $w_i$ again indicate the relative importance of getting strictly positive values (all 1 if you just want to achieve a certain number of nonzeros) and $K$ is the minimum weighted count of nonzeros that you are willing to accept. Once again, assessing the trade-off between objective value and nonzeros is a trial and error process.

Wednesday, August 6, 2014

Updated Benders Example

Two years ago, I posted an example of how to implement Benders decomposition in CPLEX using the Java API. At the time, I believe the current version of CPLEX was 12.4; as of this writing, it is 12.6.0.1. Around version 12.5, IBM refactored the Java API for CPLEX and, in the process, made one or more non-backward-compatible changes that broke the sample code. This morning, I posted an updated version.

While I was at it, I decided to create a Git repository for the code on Bitbucket. You can access the repository at https://bitbucket.org/prubin/bendersexample. If you click the "Downloads" link in the navigation menu, you will be led to a page from which you can download the source code in a Zip archive. (There is no executable for this project, just the source.) There is also a link to an issue tracker in case you run into a bug (not that I ever write code with bugs in it) and wish to report it.

UPDATE: I moved the code to a new repository. The URL is https://gitlab.msu.edu/orobworld/BendersExample. Click "Files" in the left side navigation menu to get to the source code. There will be a link (upper right) on that page to download the source in a Zip archive.

The code and issue tracker should be accessible to anonymous users (no login required). If not, hopefully someone will let me know. As with the blog, the code is released under a rather permissive Creative Commons license.