Monday, January 27, 2014

Qt's Drag-and-Drop Architecture for Python and PyQt5
Pt. 4, Coding a Drag Source

Code for a Drag Source

The whole code for the example program is at this pastebin link. Here we will look at it in pieces. Let's implement a drag-source.

class SorcWidj(QLabel):
    '''A simple drag-source with ability
    to recognize the start of a drag motion
    and implement the drag.'''
    def __init__(self,text):
        super().__init__()
        self.setText(text)
        self.mouse_down = False # has a left-click happened yet?
        self.mouse_posn = QPoint() # if so, this was where...
        self.mouse_time = QTime() # ...and this was when.

SorcWidj is just a QLabel with a few extra features, especially three members where we note the time and place of a click. We set these fields in the following method:

    def mousePressEvent(self,event):
        if event.button() == Qt.LeftButton :
            self.mouse_down = True # we are left-clicked-upon
            self.mouse_posn = event.pos() # here and...
            self.mouse_time.start() # ...now
        event.ignore()
        super().mousePressEvent(event) # pass it on up

When the user clicks down with any mouse button on this widget, mousePressEvent is entered. For this example we are only supporting left-clicks and left-drags. So, if this is a left-click, we save the position (x and y in local coordinates) and we start a millisecond timer going. Why do we want this info? Here's why:

    def mouseMoveEvent(self,event):
        if self.mouse_down :
            # Mouse left-clicked and is now moving. Is this the start of a
            # drag? Note time since the click and approximate distance moved
            # since the click and test against the app's standard.
            t = self.mouse_time.elapsed()
            d = (event.pos() - self.mouse_posn).manhattanLength()
            if t >= QApplication.startDragTime() \
            or d >= QApplication.startDragDistance() :
                # Yes, a proper drag is indicated. Commence dragging.
                self.doSomeDraggin(Qt.CopyAction|Qt.MoveAction)
                event.accept()
                return
        # Move does not (yet) constitute a drag, ignore it.
        event.ignore()
        super().mouseMoveEvent(event)

This logic is taken straight from the Qt documentation. Whenever the mouse moves above our widget with a button down, mouseMoveEvent() is called. If our mousePressEvent decided it was valid (in this case, if it was a left-click), we note the time t and distance d since that click event.

The application has a platform-dependent standard for the amount of time and distance that the mouse should move before the motion constitutes a "drag". We test against those standards. If they are met, then we initiate a drag, accept the event, and exit. Otherwise we pass the event along. Now, let's get to the beef. How do we initiate a drag?

    def doSomeDraggin(self, actions):
        # Create the QDrag object
        dragster = QDrag(self)
        # Make a scaled pixmap of our widget to put under the cursor.
        thumb = self.grab().scaledToHeight(50)
        dragster.setPixmap(thumb)
        dragster.setHotSpot(QPoint(thumb.width()//2, thumb.height()//2)) # QPoint wants ints
        # Create some data to be dragged and load it in the dragster.
        md = QMimeData()
        md.setText(self.text())
        dragster.setMimeData(md)
        # Initiate the drag, which really is a form of modal dialog.
        # Result is supposed to be the action performed at the drop.
        act = dragster.exec_(actions)
        defact = dragster.defaultAction()
        # Display the results of the drag.
        targ = dragster.target() # should be the widget that received the drop
        src = dragster.source() # should be this very widget
        print('exec returns', int(act), 'default', int(defact),
              'target', type(targ), 'source', type(src))
        return

Once you have decided that a drag is necessary, this is how you initiate it. Let's go over it in pieces.

        dragster = QDrag(self)

The QDrag object represents the drag. We will initialize it and then execute it much as we execute a modal dialog.

        thumb = self.grab().scaledToHeight(50)
        dragster.setPixmap(thumb)

The QWidget.grab() method was added in Qt5. It returns a pixmap of the widget as it currently looks. Here we grab a pixmap of our own widget (we take a selfie!) and scale it to 50px high. We apply our selfie pixmap to the drag object; it will be displayed under the cursor and follow it around during the drag. The pixmap is optional: in your application you might not use it, or you might use a pixmap of something else.

        dragster.setHotSpot(QPoint(thumb.width()//2, thumb.height()//2)) # QPoint wants ints

Another optional step repositions the selfie thumbnail so that it is centered under the cursor. Without it, the cursor sits at the top-left corner of the thumbnail pixmap.

        md = QMimeData()
        md.setText(self.text())
        dragster.setMimeData(md)

This is the heart of drag initiation. You are supposed to package the data that is being dragged in the form of MIME data. MIME began as a standard for attaching arbitrary data to emails. It has been extended to allow passing data between any programs.

In principle you can package just about anything as MIME data. You load the QMimeData object with the data and set it to have the appropriate MIME type. Then you assign it to the drag object.

Why do this? Because you don't know where the drag is going. It isn't necessarily going to some other part of your app. It might be dropped anywhere, on any app, or on the desktop. By packaging the data as a MIME type, you ensure that any other application that supports MIME can accept it.

In this example, we are punting the whole issue and setting the MIME data to the current text of this QLabel. If your application has to pass something more structured than simple text, you will have to study the QMimeData reference and the Qt page on MIME data.
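For something more structured than plain text, a custom MIME type can be attached alongside a plain-text fallback. Here is a minimal sketch; the type name "application/x-example-record" and the record contents are invented for illustration, not a real standard:

```python
from PyQt5.QtCore import QMimeData, QByteArray

# Sketch: package structured data under an invented custom MIME type,
# with a plain-text fallback for targets that only understand text.
md = QMimeData()
md.setText("fallback text for plain-text-only targets")
record = b'{"id": 42, "label": "example"}'  # e.g. JSON-encoded app data
md.setData("application/x-example-record", QByteArray(record))

# A drop target could later test for the custom type:
#   if md.hasFormat("application/x-example-record"): ...
```

A target that understands your custom type can take the structured bytes; anything else (another app, the desktop) can still accept the plain-text fallback.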

        act = dragster.exec_(actions)

This statement initiates the drag operation. Just as with a modal dialog, you exec_() the drag object. The argument is the set of actions you will permit the drop to perform: some OR-combination of Qt.MoveAction, Qt.CopyAction, and Qt.LinkAction. (We passed these from the mouseMoveEvent() code.)

Once the drag starts, this code is effectively suspended until the user lets go of the mouse. In Linux and Mac OS, signals continue to be processed and other threads of the app keep executing. In Windows, the whole app stops.

Eventually the user will relax her finger on the mouse and end the drag. Then, in theory, the action code that was actually performed is returned. The statements that follow in our example print out what can be learned after the drag completes: the supposed action, the default action, and the identities of the source widget (this one) and the target widget that accepted the drag.

If and only if the drop is accepted by a Qt widget in this same application, the returned action will be one of Qt.MoveAction, Qt.CopyAction, or Qt.LinkAction. And the widget returned by the target() method of the drag object will be a reference to the widget that accepted the drop.

If the drop completes in some other application, whether written in Qt or not, the returned action will be 0, and the value returned by dragster.target() will be None. Those things will also be the case if the drop simply doesn't complete, for example if the user releases the mouse over some location that doesn't accept drops.
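Those rules can be condensed into a small helper, sketched here; report_drop is an invented name, not part of the example program or of Qt:

```python
from PyQt5.QtCore import Qt

def report_drop(act, target):
    '''Classify what QDrag.exec_() and QDrag.target() report after a
    drag, per the rules described above. Illustrative helper only.'''
    if act == Qt.IgnoreAction or target is None:
        # Dropped in another application, or never completed at all;
        # Qt cannot distinguish these two cases.
        return 'outside this app, or not dropped'
    names = {Qt.MoveAction: 'move', Qt.CopyAction: 'copy', Qt.LinkAction: 'link'}
    return 'dropped in this app: ' + names.get(act, 'unknown')
```

In the example, doSomeDraggin could call something like report_drop(act, targ) after exec_() returns.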

This is a bit of a hole in the Qt drag-and-drop support. There is no standard way to tell if a drag completed successfully in another app's window, or just didn't complete.

Qt's Drag-and-Drop Architecture for Python and PyQt5
Pt. 3, The Drag Source

A drag source in Qt is any QWidget derivative in which:

  • mousePressEvent() is implemented to note when and where the mouse is clicked-down
  • mouseReleaseEvent() is implemented to note that the mouse is no longer clicked-down
  • mouseMoveEvent() is implemented and detects when the mouse has moved far enough, or been down long enough, to show that the user wants to begin a drag, and then...
  • It starts a drag by creating a QDrag object, loading it with data, and executing it.
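The code in Pt. 4 shows the press and move methods; the release method, which clears the pending-click state, might look like this (a sketch, with a class name invented here, using the same mouse_down flag as that example):

```python
from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import QLabel

class ReleaseAwareLabel(QLabel):
    '''Sketch (invented name) of the release half of a drag source:
    clear the pending-click flag so that a later mouse move is not
    mistaken for the start of a drag.'''
    def __init__(self, text):
        super().__init__()
        self.setText(text)
        self.mouse_down = False  # has a left-click happened yet?

    def mouseReleaseEvent(self, event):
        if event.button() == Qt.LeftButton:
            self.mouse_down = False  # click is over; no drag pending
        event.ignore()
        super().mouseReleaseEvent(event)  # pass it on up
```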

One way to look at drag is that it is a peculiar kind of modal dialog, much like a QFileDialog. When the user manipulates the mouse in a certain way, you know you should initiate this "dialog". When the drag pseudo-dialog completes, you have a result that sometimes indicates what happened. Other times you are left guessing.

In the next post, we'll look at some real code to see how this is done.

Qt's Drag-and-Drop Architecture for Python and PyQt5
Pt. 2, The Drop Target

The Drop Target

A drop target in Qt is a widget (any QWidget derivative) in which:

  • The widget at some time sets self.setAcceptDrops(True)
  • The widget implements the dragEnterEvent() method
  • The widget implements the dropEvent() method

The widget may optionally implement the dragMoveEvent() and/or dragLeaveEvent() methods, but these are not usually required.

When a widget sets acceptDrops to True, its dragEnterEvent() will be called when the mouse cursor of a user drag crosses into the widget's boundary rectangle. In this method your code inspects the purpose and content of the drag and decides whether it's for you. The code can look at the modifier keys (is this an Alt/Option- or Control-drag?). It can look at the mouse buttons (left-button drag, or right-button?). It can interrogate the type and even the content of the data that would be dropped.

If dragEnterEvent() rejects the drag, nothing further happens. The mouse cursor may (or may not) change to show that the drop is not allowed, but no other drag-related event methods will be delivered to this widget for this drag.

If the dragEnterEvent() code indicates that the drag is acceptable, more things will happen. The cursor will move across your widget's surface, and your dragMoveEvent(), if implemented, will be called repeatedly as it does so. The cursor may wander out of your widget without dropping; if so, your dragLeaveEvent() will be called, if implemented.

Or, the user may release the mouse button over your widget. Then your dropEvent() is called. At this point you can still reject the drag; otherwise your code is supposed to take the data out of the drag and do something with it.
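A minimal drop target for plain text might be sketched like this; the class name DropLabel is invented here, and it does nothing beyond illustrating the three requirements listed above:

```python
from PyQt5.QtWidgets import QLabel

class DropLabel(QLabel):
    '''Minimal sketch (invented name) of a drop target that accepts
    plain text, following the three requirements listed above.'''
    def __init__(self):
        super().__init__()
        self.setAcceptDrops(True)  # requirement 1: opt in to drops

    def dragEnterEvent(self, event):  # requirement 2: vet the drag
        if event.mimeData().hasText():
            event.acceptProposedAction()  # yes, this drag is for us
        else:
            event.ignore()

    def dropEvent(self, event):  # requirement 3: take the data
        self.setText(event.mimeData().text())
        event.acceptProposedAction()
```

Dragging the SorcWidj of Pt. 4 onto a widget like this would replace the label's text with the dragged text.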

If this sounds complex, it is. Fortunately many Qt widgets handle drops automatically. For example, QListView and QTableView handle dragging and dropping, and it's a blessing that they do.

In the next post we'll review the design of a drag source widget.

Qt's Drag-and-Drop Architecture for Python and PyQt5
Pt 1, an Overview

In the following series of posts I will review the classes and methods needed to implement Drag-and-Drop functionality in a Qt5 program written in PyQt5.

The official documentation (for C++ of course) is found in this overview. It contains links to the reference pages for most of the important classes, and it covers the basics for a C++ programmer. Doing the mental translation from C++ syntax to Python/PyQt syntax is a habit that the Python programmer needs to learn.

However, I found the official overview somewhat confusing. One problem is that it does not clearly distinguish the design of a drop target, a widget that receives dropped data, from the design of a drag source, a widget that recognizes a mouse drag motion and initiates a drag. The two are quite distinct. They are executed at different times and might be executed in different apps, with the drag beginning in one app and the drop ending in another; thus, from any one program's point of view, you can have a drop without a drag and vice versa. They use different classes and require you to override different class methods. All told, they need separate treatment, which I will give them in this series of posts.

The User's View

The user thinks of drag-and-drop as a single smooth mouse operation: click down on something; move the mouse to something else; let go. During the drag the mouse cursor may change its appearance in some familiar way, perhaps acquiring a plus-sign to indicate a copy will happen or a slashed-circle to indicate that no dropping is allowed.

It is possible for the mouse to acquire a little thumbnail image of the thing being dragged, as a reminder. For example when dragging text in an editor, a translucent copy of the dragged text, or part of it, may follow the mouse cursor.

The user will often be dragging from one place in an application to another place in the same application: dragging a paragraph of text from one place in a document to another, for example; or dragging a list item to a different position in the same list.

But it may be that the user is dragging something from one application and dropping it into a completely different application: for example, dragging text from a Qt editor and dropping it on the host Desktop as a "clipping"; or dragging a URL from a browser window and dropping it into a Qt widget of some kind.

All in all, drag-and-drop is a simple, quick, familiar operation to the user—or should be. But making it happen at the level of program code turns out to be quite complicated.

The Program's View

To the program written in Qt (and specifically PyQt5: in these posts, "Qt" and "PyQt" are synonyms), it is actually not correct to speak of "drag-and-drop" as a single thing. There is drag, the initiating of a drag operation, and there is drop, the delivery of content to a target. These two use completely different classes and methods and are designed in isolation from each other.

Moreover, remember that the drag might start in a completely different application, so it arrives at your Qt code as an unheralded drop with data you didn't prepare. Or a drag that you initiate in your Qt code might end being dropped in some completely unrelated program.

The Qt drag-and-drop support is rather unhelpful in these cases of dragging between different applications. It only works fully as documented when the drag and drop are between widgets in the same application. I'll point out these issues as they come up.

In the next post we'll take a high-level look at drop target code.

Saturday, June 22, 2013

Prius Plug-In Hybrid: Some Numbers

In April 2012 we took delivery of a new Prius Plug-in Hybrid Vehicle (PHV), replacing a 2005 Prius. Now in mid-2013 I have enough history of utility and gasoline use to make a reasonable comparison of the relative costs of these vehicles.

Unfortunately for simple math, almost immediately after we got the PHV, we took off for six weeks in France. This mid-April to May gap messes up the pattern of utility bills and driving history. I had to choose two slightly mismatched periods of time for comparison: for the old vehicle, April 2011 through March 2012; and for the new one, June 2012 through May 2013.

This means that the electricity usage numbers are not precisely comparable. However, both periods span the darkest months of the year and also span one summer. So both include maximum lighting time and maximum home air conditioning. Nevertheless, all of these numbers should be taken as approximate.

Gasoline Usage

Twelve months of the 2005 Prius: 12048 miles, 251 gallons. That's 48mpg or, a more significant measure, 21 gallons per 1000 miles.

Twelve months of the 2012 PHV: 10790 miles, 170 gallons, yielding 63mpg or 15 gallons per 1000 miles.

Bottom line: given our usage patterns, the PHV saves us 6 gallons per 1000 miles driven. At current prices that's about $25 per Kmile, or given our normal 12K/year distance, about (ta-daa!) $275 per year in fuel costs.

Electricity Rates

Palo Alto Utilities charge for electricity usage based on Kilowatt-Hours (KwH) per month. Up to 10 KwH per day is called Tier 1, and charged at $0.09524 per KwH. Usage from 10 to 20 KwH is Tier 2, charged at $0.1302. Our latest bill, usage for May 2013, showed 487 KwH charged at $60.36. That is presumably based on

31 days * 10 KwH = 310 KwH at $0.09524  = $29.52
487 - 310 = 177 KwH at $0.1302          = $23.05
Computed total                            $52.57
Actual bill                               $60.36

Hmmm. Think I need to have a talk with the Utilities...

Electricity Consumption

For the period May 2011 through April 2012 we consumed 5206 KwH, an average of 433.8 per month.

For the period July 2012 through June 2013 we consumed 6438 KwH, an average of 536.5 per month.

Thus the PHV seems to have added approximately 100 KwH to our electricity usage, or at Tier 2 rates, about $13 to our monthly electric bill.

The Bottom Line

Based on these somewhat approximate numbers, the PHV is saving us $275 per year in gasoline while costing us $156 in electricity, for a net saving of (ta-ta-daaaa!) $119 per year.

Since the PHV costs $7,800 more than a regular Prius ($32,000 versus $24,200 currently at toyota.com), it should pay for itself in only... 65 years. (Sad trombone: wah-wah-wah-waaa)

Side Issues

Our driving pattern includes a lot of short local trips. That's why we bought the PHV; its 11-mile battery distance means we often go several days without the gas engine coming on, and often the dashboard readout shows us getting over 100mpg well into a tankful. However, on longer trips and freeway driving the PHV does no better than a normal third-generation Prius, about 55mpg. Our battery-powered local hops pull the average up to 63, or about 15% better than a non-plug-in. But I've talked online with a PHV owner who does nothing but commute 6 miles each way, and is averaging over 160mpg.

I should also note that Palo Alto Utilities is pilot-testing a Time of Day Usage program. Under this scheme, electricity used between 11pm and 6am is discounted $0.019 per KwH. That would lower the cost of the PHV's 100 KwH (all Tier-2) from $0.1302 to $0.1112 for a saving of $2/month. Let's see: hiring an electrician to install some kind of timer in the outdoor outlet where we plug in the car would cost what, $250? So that would take even longer to earn out than the car itself.

Update: The PHV has a built-in charge timer! I can set it to charge itself only between 11pm and 6am. Thus we could realize the Time of Day discount without further expense. That would lower the electricity cost from $156 to $132 per year, increase the savings from $119 to $143 per year, and the PHV pays for itself in only 54 years! Yeah! I have registered for the TOD program but the pilot program is currently closed.

Saturday, January 12, 2013

Raspberry Pi, episode 1

Okey-dokey, I will repurpose this blog yet again to describe trying to use the Raspberry Pi that Paul gave me for Christmas.

There's this much one can say about the happy group of Brits behind the Pi: they are teaching many, many people how to spell Raspppberry.

The actual thing is basically a single-board Linux computer, remarkable mainly because of its tiny size: about the size of a playing card. Central on the board is a single chip that comprises 512MB of RAM, a 32-bit CPU, and a graphics processor. Pretty much everything else on the tiny board is either connectors or resistors and capacitors.

The Pi all hooked up. Clockwise from top right corner: Power via a micro-USB connector; the SD card; some experimenter I/O pins; a yellow RCA connector for an analog TV signal; an audio jack; some blinky lights; two USB plugs for keyboard and mouse; an ethernet cable; the HDMI cable leading to the digital TV.

When I stop to think about it the Pi can make me rather giddy. I remember configuring my first home computer, with separate, 10-inch-wide cards for a 16-bit CPU and for 64kilobytes of RAM. So here's one fingernail-size chip comprising a CPU and memory, both four orders of magnitude more capable. (64e3 versus 51.2e7, and as for the CPU, forget it. The Pi's graphics processor can do 64 gigaFLOPs. That old 2MHz Z80 isn't in the same galaxy.)

Anyway it needs a keyboard and mouse, so I went to Fry's and found an optical USB mouse for $3.99 and a USB keyboard for $5.99. Later I had to go back and get a 2-metre HDMI cable, which cost more than the other two combined.

Getting the OS

The big hurdle in getting the Pi going is loading its OS. The Pi's only provision for mass storage is a single SD card, 8GB or larger. (I try not to think too much about having 8 or 16 gigabytes of memory on a thumbnail-sized chip.) If the SD card is initialized as a FAT-32 file system and loaded with a disk image downloaded from the Raspberry mothership, the Pi will supposedly boot up from it into a full-blown Linux system.

That's what the quick start guide claims. But the problem is getting the disk image written onto the SD card. The Pi instructions are for Windows users; my tool is a Macbook. Nevertheless, I thought I'd done it correctly, following these instructions at the "Embedded Linux Wiki".

It didn't work; or at least, when the Pi was all hooked up and plugged in, the screen went blank.

So I did it again and this time, the system booted up into a typical Linux startup screen going to a config screen.

It's aliiiiive! The Pi with the $4 mouse, $6 keyboard, and an expensive Samsung TV doing duty as a monitor.

I config'd it a bit, setting the locale and the time zone, although the latter was a bit of a puzzle: is California in Alaskan or Aleutian or Pacific time?

A Desktop!

I thoughtlessly told the configurator to update itself, but no ethernet was plugged in, so that hung. So I popped the power in and out and this time when it came up, it was in a graphical desktop!

The desktop after playing around a bit: terminal, Midori browser, debian doc open.

Fall Down Go Boom

I played with the desktop for a bit. Impressive that the Pi had instantly found the LAN and the browser could access the web without problem. I started apt-get update to update the database of installed software, and after a bit, everything hung solid. The mouse pointer still tracked but nothing responded to it.

So after a few minutes I popped the power plug out and in and when it booted now, it said PANIC.

Oh dear oh dear oh dear...

One More Time Unto the SD Card

I put the SD card in my Macbook and reformatted it and copied the image to it again. Checked that the Pi would boot and started through the configuration screens again.

After setting the timezone, there was a long pause. Then a series of file-system messages about inodes, including "This should not happen!" and "Data will be lost" and other stuff. Not helpful messages. In no way suggestive of what to do or how to recover.

Oh dear, some more. The Life Of Pi is finished, it seems.

And then it didn't respond to anything.

Anyone want a Pi?

I conclude that the Raspberry Pi is indeed a Linux platform of amazingly small dimensions. Loaded up with dedicated device-control software it can no doubt do yeoman duty as a lump of embedded smarts. People are doing amazing things with it.

However, it is not a toy. It ain't for kids; at least, not for kids who aren't ready to type sudo apt-get update without thinking about it. It needs constant hand-holding by an experienced user of command lines, and that user had better have a lot of patience. The software, combined with the iffy hardware qualities of SD-card mass storage, is just not reliable. Or at least, this one example was not. The user experience, even for a very knowledgeable techie, is full of frustration. For a non-nerd, it would be hopeless.

Wednesday, April 20, 2011

Changing from DirecTivo to the DirecTV HR24 DVR

For a long time I put off the switch to HD television because doing so would mean giving up our cherished, 8-year-old Series 2 DirecTivo unit. The HD digital video recorder from DirecTV does not use Tivo's patented and deservedly popular user interface.

Finally the wait got too exasperating and we made the switch-over: a new, beautiful Samsung 46C8000 TV, a new A/V receiver, and the DirecTV HR24 DVR. (I ordered the HR24 from Amazon to be sure of getting that model, and not the slower HR23.) This is a summary of the main differences between these DVRs, as seen by a long-time Tivo user. The bottom line is: it's fine, no big problems.

This is a comparison of the standard Tivo user experience, to the DirecTV HR24 experience, as seen by a long-time user of the Series 2 DirecTivo.

The Box

As a physical box the HR24 is much less conspicuous than the DirecTivo. It is smaller in all dimensions, and lighter. Its case is midnight black with a subtle blue-glowing icon. Its primary output is HDMI, a big forward jump compared to the DirecTivo which lacked HDMI.

The Remote

The remote is not as comfortable or intuitive as the Tivo "Peanut." On the other hand, it isn't possible to pick it up and try to use it upside down, as my wife sometimes did with the peanut.

A significant feature that is new to Tivo users: the HR24 has an "off" mode. The DirecTivo was never turned off; it was always producing audio and video output unless you went through several menu levels to put it in Standby mode.

The HR24 remote has ON and OFF buttons. When you turn it off, the HR24 clears out of any menus, cancels any paused recording, shuts off its video and audio outputs and dims its front panel. Obviously it remains "on" internally because it makes timed recordings. The ON/OFF buttons can be programmed to turn your TV on/off as well, so that the TV and DVR come on together. You can also program the remote to operate a separate A/V receiver, but not with a single button-press.

Something else different: when the HR24 has been paused for more than a minute, it goes to a screen-saver mode in which a DirecTV icon wanders around a black screen. The Tivo did not do this; it would sit on a paused image indefinitely.

You can get deep into nested menus with the HR24 interface. However the remote has a dedicated BACK button for backing out of all menus in reverse sequence.

An Interface for the 80s

The first thing the Tivo user notices on meeting the HR24 on-screen user interface is the cosmetic differences. The Tivo interface used calm, deep colors and had a polished look created by rounded corners, drop-shadows, anti-aliased fonts, and subtle color gradients.

The look of the HR24 interface is anything but subtle and far from polished. It uses garish light-blue, black, yellow and orange colors. Every menu and window is flat and hard-edged: there's not a gradient or drop-shadow anywhere. The only concession to appearance is crudely pixelated rounded corners on some rectangular elements. Fonts, too, are chunky and not anti-aliased. It's a surprise that an HD DVR, which will always be used with a 1920x1080, 24-bit screen, still uses interface elements that would look natural on a 16-color, 640x480 DOS screen of the 1980s! Decades of interface-design progress, ignored.

Cosmetics aside, however, the functions that you command through this retro UI are in almost all cases equal to or better than Tivo's.

Playing Recorded Shows

With the Tivo, I most often went to the What's Playing List. Tivo provided a List button to get there, and so does the HR24. When scrolling through this or any other list, you can use Channel up/down to scroll by pages, just as with Tivo. Select a recorded program and press the Play button to start playing it, or hit the Red button to delete it.

A nice feature of the HR24 is a display of disk capacity used. Tivo lacked this feature.

Watching a live or recorded show, the controls are basically the same as Tivo. Play, pause, fast forward, rewind and jump back a few seconds are all similar. The FF button has four speeds, not three. Compared to Tivo the third speed is too slow and the fourth is uncomfortably fast. When you stop fast-forwarding, the HR24 jumps back a bit to compensate for reaction time, but it jumps further than Tivo did, and less predictably.

Tivo made single-frame advance and slo-mo quite easy. It is possible to engage slo-mo with the HR24; I have not found how to do frame-advance. On the other hand, the HR24 lets you set "bookmark" points in a recorded show so you can easily skip back to replay a favorite scene.

One Tivo play feature not easily replaced is Tivo's go-to-end button. On the HR24, the jump-ahead button doesn't go to the end. When playing at normal speed, it skips 30 seconds. If you are at any FF speed it skips 15 minutes. You can force a recording to its end (or to the real-time point if it is still recording) as follows: start FF at any speed, then press the skip-ahead button enough times to bring the recording to the end.

On Tivo, you could bail out of a program with the two-button sequence go-to-end, List. This was a quick way to bring up the Delete yes/no dialog. On the HR24, the same result is obtained with FF then skip (skip, skip...) to the end, and wait a few seconds until the delete dialog appears.

On DirecTivo, deleting a program moved it to a special folder from which you could recover it, sometimes days later. There is no such "trash can" folder on the HR24. A deleted program is gone immediately and forever.

Upcoming from Info

The HR24 has a dedicated INFO button that you can use to get info about a program at any time: while in a list of recorded programs, or while playing a recorded or a live program, or while browsing the on-screen guide. The displayed info includes a "first aired" date, useful for knowing when a program is a rerun.

Something that I always wanted in the Tivo interface was the ability to move easily from info about a recorded show, to a list of upcoming episodes of that show or to the Season pass manager. The HR24 provides this. The info panel brought up by the INFO button has a short menu of things to do, including finding upcoming episodes of the same show, finding other shows with the same performers, setting up to record the series, or going to the series options if the series is already being recorded. Again, you can do this while playing a program, or from the guide, or from search results, or the list of recorded programs. It's very handy.

Search

The search function of the HR24 is more accessible, more elaborate, and more useful than Tivo's Wishlists. You enter text in a similar way, by navigating a matrix of characters. But as soon as you begin entering text, search results begin to appear -- as with a Google search. The suggested results are quite good. For many searches, as few as two characters will produce the thing you want in the list of tentative results.

The most common search is for the name of a show, but you can also search a channel name (e.g. ESPN) to get a list of shows on that channel; or search a person's name and see all shows in which that person appears; or search a category (e.g. REALITY) and see all shows in that category. Or you can do a keyword search, finding every program whose listing contains that keyword. Once you have a list of shows by any means, you can browse in it, and hit Record or INFO on any one. You can also set "Auto-record" to record all shows that match a particular search, similar to Tivo's auto-recorded wishlists.

The last 15 searches you've done are available in a list so you can easily repeat them. The search also supports Boolean AND, OR and NOT functions and other special keywords. These are not documented in the user manual; you learn about them in online forums.

Performance

On internet forums there were many complaints about sluggish response of the previous DirecTV DVR, the HR23. The HR24 is apparently much faster. It is as responsive as the Series 2 Tivo in almost everything. The only place I have noticed any sluggishness is in the response to the RECORD button. Sometimes it takes a couple of seconds to respond to this button. That's awkward because, if you press the button twice, you have requested recording the entire series of that program. So you learn to press once, firmly, then wait for the ® icon to appear.

Missing Suggestions

The only major Tivo function that the HR24 lacks is Suggestions. I miss my old weekly exercise of sitting down to go through 80 to 100 programs in the Suggestions folder, deleting the majority but intrigued by some. Suggestions should be easy to implement; the algorithm would be just like Amazon's "Customers who bought this also bought X" feature. DirecTV could easily tell my DVR, "Customers who record the programs you do, also record these other programs, grab a few if you have space." I don't see why they don't do this.

No Regrets

I put off converting to HD and the DirecTV DVR for a long time out of reluctance to give up the Tivo user interface. Now that I've done it, the new environment is quite comfortable and usable. If a high-def DirecTivo ever does appear, I won't be in any rush to change back to it.