February 29, 2004
Location, Location, Location!
by peterb
This is the first in a series of articles examining videogames and what makes them fun.
I enjoy driving games, all kinds of driving games. Driving games feed into my love of auto racing, and of cars, and put an entire class of vehicles that I can't afford in real life into my hands. So I tend to play them a lot. One class of games is the "tune up" game: you don't just acquire cars, but actually have to go buy aftermarket parts and adjust the shocks and choose the right tires, etc., to get your car in shape to win races. Two examples of these are Gran Turismo 3 for the Playstation 2 and Sega GT 2002 for the Xbox.
From a pure driving perspective, Sega GT is the better game. The way the cars handle is more realistic; the change in the way the cars move when you tune them feels more correct. And the graphics are better, too. So are the controls. Yet every time I want to play a "tune up" game, I reach for Gran Turismo 3. Every single time. Sega GT, which delivers a "better" driving experience, sits on my shelf, unplayed. Why?
Because with Gran Turismo, I can drive at Laguna Seca. I can drive -- virtually -- around a real space, a space that existed in my head before I bought the game.
In other words, a sense of place can be an important element of the enjoyment of a game. For some of us -- and I'm one of these people -- it's not only important, it's crucial. Turning again to Sega GT, the game is full of a number of interesting, cleverly designed, well-balanced tracks that allow for great racing. But if you make me choose between "drive around a perfect race course that seems to be placed in a generic-yet-unnamed European city" and "drive around a deeply flawed race course that takes place in Rome," I'm going to choose Rome every time. Look, there's the Forum. There's the Colosseum. There's the Baths of Caracalla.
Microsoft Games learned this lesson well; Project Gotham Racing took place in New York, Tokyo, London, and San Francisco. Project Gotham Racing 2 -- which is, for my money, simply the best pure street racing game ever made -- takes place in a raft of other cities, including the iconic Washington, DC, speedy Stockholm, atmospheric Barcelona, and the most absolutely perfect virtual realization of Florence possible, among others. I usually like the courses that take place in cities I've been in; I don't think that's an accident. (Dear Microsoft: really, Pittsburgh would be perfect for PGR3 -- come visit!)
For me, this goes beyond the mere representation of a city and even gets into naming. If Sega GT had a Rome track with the Colosseum, and the Forum, and all the streets I've driven, and street vendors selling chestnuts on wet March days, and little enoteche where you could get a great wine for a few euros that, even if you can find it back Stateside, still won't taste as good -- if Sega GT had that track and then called it EUROPE 03 instead of "Rome," I wouldn't like it, in the same way that I don't like how GTA3 makes up names for the cars, like "Inferno," instead of having a Ferrari. There's a semiotic issue here, somewhere. At the very least, there's an issue in my head, but I don't think I'm alone in this.
The tricky thing is that this issue of labeling, of naming, of mapping a videogame model into the player's model of a real-world space, has nothing to do with games qua games. Any game has somewhere deep beneath it a set of abstract rules that describe it; all racing videogames are fundamentally the same when stripped of details: move your piece in a circle, and finish first. But those details, location first and foremost among them, are what give games the element of fantasy, narrative, and drama that separates them from mere mental and physical exercise. The location doesn't even have to be a literal real-world location. Very little separates Day of Defeat from most other first-person shooters except for the fact that most of us have seen the first battle scenes in Saving Private Ryan, and so when we storm the beach in Normandy in the game it points to our memory map of that location, which was in turn given to us by the movie, which pointed to a place that, in some sense, doesn't exist any more.
I was talking to Andrew Plotkin about "platform" games (e.g., the Mario games) the other day, and I mentioned that my new technique for dealing with how stale the genre has become is that I'll play a platformer until I reach the first "lava level." "As soon as I reach the lava level, I know that the designers are completely out of ideas, and there won't be anything else interesting in the game." He told me that his understanding was that as a general rule, the plot for these games is only written long after development has begun, so it's standard operating procedure for early versions to incorporate a lava level, a water level, etc. If true, this is one example of what separates videogames, as a form of mass entertainment, from movies. Filmmakers -- well, successful ones, anyway -- understand that the location of a movie can be as important as the main character.
Only when it's unheard of for a video game to be designed without its location and mise-en-scene being one of the first things considered, rather than one of the last, will it even begin to make sense to talk about the videogame medium being as mature, artistically, as cinema.
Additional Resources
If you liked this article, you might be interested in some of these links:
- The next article in this series.
- Andrew Plotkin (aka Zarf) writes Interactive Fiction games.
- On creating virtual cities for Project Gotham Racing 2
- Essay on computer game spaces reduced (PDF)
- Day of Defeat's maps are signifiers pointing to places we know even though none of us have been there.
February 27, 2004
What Programming Language?
by peterb
I hear this question a lot, typically from kids who have just discovered that there's more to a computer than a web browser, and who are curious about where to go from here. The people who ask this question typically don't have any specific project in mind; if they did, it would be a lot easier to answer them. Instead, they're really saying "I don't know much about programming, but I think it might be kind of fun. Where's the best place to start?"
An experienced programmer on a dark day might answer the question "What programming language should I learn?" with "None. Learn to play a musical instrument instead." Today is not a dark day, however, and I'll do my best to answer it.
The person I'm addressing this article to isn't the person that has as their objective "I want to write a Windows application" or "I want to write a GUI for Bittorrent on Linux" or "I want to write a little tool that will run on my Mac to talk to iTunes." The target of this article is someone who has a general desire to become a software developer but doesn't yet understand how to get there.
So in that context, the high level, vaguely accurate answer to the question "What programming language should I learn?" is "It really doesn't matter." Partially this is because the question is ill-formed. The specific language one uses is mostly orthogonal to developing the skills one needs to be a good software developer. Let's look at what those skills are, first, and then later we can come back to the question and make some actual recommendations, rather than just rejecting the question.
High-Level Skills
A developer needs to be able to describe a problem to be solved. She needs to be able to break the problem down into smaller, easier problems. She needs to be able to describe a set of conditions that constitute solving the problem. She needs to be able to think of tests that determine whether a given program or part of a program is correct.
Those are the sorts of skills every good developer has regardless of whether they're writing code for themselves or for the marketplace. If a developer wants to work on a project with other people, be it open source or commercial, she needs additional skills. She needs to know how to find and read documentation. She will need to know how to use an Application Programming Interface (API) that someone else has provided. She will need to know when to use an already-existing API ("almost always") versus developing a new API ("almost never"). She will need the discipline to not be constantly reinventing the wheel. She will need to know how to write documentation. She will need to understand what makes code maintainable (correct and adequate documentation, proper use of namespaces, consistent formatting, useful comments), and to actually use that knowledge when she writes code.
All of those skills apply no matter what your weapon of choice is in terms of language.
Classifying Languages
There are three rough categories of languages of interest to the modern programmer. Imperative languages (sometimes you'll hear these described as procedural), such as C, C++, Java, Pascal, and Modula-3, are about providing a sequence of commands for the computer to execute. In functional languages such as ML, Lisp, and Scheme, programs aren't so much executed as they are mathematically evaluated, as with the lambda calculus. And scripting languages such as Perl, Python, and Tcl allow for rapid prototyping of simple tasks. Note: yes, I understand that there's no academic reason to separate scripting languages from their imperative compiled brethren, but there are practical reasons, which I'll discuss later.
When programming in almost any (modern, useful) language, you're going to find yourself using a variety of directives. Some will be part of the core language specification: in C, integer arithmetic and assignment will be the same on every platform. Others will be part of a library that is likely to exist on any platform your program runs on (e.g., stdio in C). Lastly, there are APIs which are platform-specific; a C library routine to open a dialog box on a Win32 OS would be an example of this.
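To make those three layers concrete, here's a minimal sketch -- in Java rather than C, since that's the language I recommend below, and with a hypothetical Windows-only command standing in for the platform-specific layer:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class Layers {
    public static void main(String[] args) throws IOException {
        // Core language: arithmetic and assignment work the same on every platform.
        int total = 2 + 3;

        // Standard library: java.util is there on any platform with a JVM,
        // much as stdio is there for any hosted C implementation.
        List<String> notes = new ArrayList<String>();
        notes.add("total is " + total);
        System.out.println(notes.get(0));

        // Platform-specific: a hypothetical example -- launching "notepad"
        // only works on systems that actually have a notepad command.
        Runtime.getRuntime().exec("notepad");
    }
}
```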
Specific Cross-Language Skills
Learning to program skillfully is something that comes only with great time and effort. There are plenty of programs that will compile just fine and run correctly that can still be called "bad programs," in the sense that when you look at the source, it is clear that the author doesn't have a grasp on some fundamental concept. The three little pigs built houses of straw, sticks, and brick. All of them served just fine as shelter, but only the brick house was strong enough to withstand the wolf. Your goal should be to not just learn how to make your programs run, but how to be confident in their correctness, robustness, and performance, before you've written a single line of code. To achieve that state, which may seem like a paradox, you need to understand the concepts underlying the craft of programming.
A shorter way of saying this is: don't worry about learning the syntax of a language. Don't concentrate on it. Don't spend time worrying about it. Learning where the semicolons or parentheses go will come by itself, as you write code and go through compile-run-test cycles. Look that stuff up when you need to, but understand that "learning what a constructor in Java looks like" is, in the long term, not a valuable thing to concentrate on. Learning what a constructor is and what it does is a valuable thing to concentrate on.
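For example, here's a minimal, hypothetical Java sketch of that distinction: the syntax is trivia you can look up, but the durable idea is that a constructor is the code that runs when an object is created, and its job is to put that object into a valid initial state.

```java
public class Account {
    private final String owner;
    private int balanceInCents;

    // The constructor: it runs when "new Account(...)" is evaluated, and its
    // entire job is to establish a valid initial state for the new object.
    public Account(String owner) {
        this.owner = owner;
        this.balanceInCents = 0;
    }

    public static void main(String[] args) {
        Account a = new Account("peterb");  // this line invokes the constructor
        System.out.println(a.owner + " starts with " + a.balanceInCents + " cents");
    }
}
```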
So that's what you shouldn't focus on: syntax. What should you focus on? Here, in a vague sort of didactic order, are concepts that I expect a skilled programmer to understand regardless of the language they are using -- even if the language they are using at the moment doesn't actually support the item in question.
- What a variable is.
- How variables are typed. Why type is important.
- Scope (Lexical, Dynamic).
- Assignment.
- Basic data structures.
- Basic control structures. Conditionals.
- Pointers.
- Dynamic storage allocation. Garbage collection. How to manage memory if your language doesn't have GC.
- Linked lists. Hash tables. B-trees.
- Iteration. The off-by-one problem. How to avoid it (see the sketch after this list).
- Recursion. When to use it. ("almost never").
- Basic algorithms -- sorting, searching, etc.
- Categorizing the run time of a piece of code (O(n), O(n^2), etc).
- Debugging techniques, from diagnostic prints to using a debugger.
- Assertions and how to use them properly.
- Basic object-oriented programming concepts (inheritance, encapsulation).
- Threads. Typical concurrent programming mistakes, and how to avoid them. The producer/consumer problem.
- The difference between locking a mutex and waiting on a condition variable.
- Synchronous vs. asynchronous operations. Callbacks.
- Event-driven models.
- Exceptions. Strategies for handling errors generally.
- Advanced testing strategies. Fault injection.
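To pick one concrete example from the list above (the iteration item), here's a small, hypothetical Java sketch of the classic off-by-one mistake and the idiom that avoids it:

```java
public class OffByOne {
    public static void main(String[] args) {
        int[] scores = {3, 1, 4, 1, 5};
        int sum = 0;

        // The classic mistake: "i <= scores.length" walks one element past the
        // end of the array and throws ArrayIndexOutOfBoundsException.
        // for (int i = 0; i <= scores.length; i++) { sum += scores[i]; }

        // The correct idiom: a half-open range, from 0 up to but not including
        // the length. One pass over the data, so this runs in O(n) time.
        for (int i = 0; i < scores.length; i++) {
            sum += scores[i];
        }
        System.out.println("sum = " + sum);
    }
}
```

The habit of thinking in half-open ranges -- from zero up to but not including the length -- is what keeps most loop and array code correct, whatever language you end up writing it in.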
Any one programming language you choose, unless you're really going out of your way to be obscure (e.g. Prolog), should get you through at least half of that list. Then you can decide whether to stick with the language you started with for the rest of the list, or start on a new one to fill in the gaps.
Practical Considerations
Programming is fundamentally a "learn by doing" activity, so your language choices are somewhat constrained by the need to use a language that can actually run on your operating system of choice. This still leaves you with a fairly wide set of options.
Language availability isn't the only practical consideration. If applicable, is there a debugger for the language you want to use? What set of libraries will you be using? How much documentation is available for the language you want to learn? Is there an active community you can turn to for help?
Enough Already. Just Tell Me What to Learn!
OK. I've tried to make the point here that whatever languages you decide to learn, you should be able to develop your skills over a period of years such that when you decide to learn a new language, it will just be a trivial matter of absorbing the new language's syntax. However, perhaps in the interests of sparking debate, I'll give my own personal opinion on teaching languages. Nothing in this section should be construed to mean that I'm saying that languages other than the ones I'm recommending aren't any good (except Modula-3. Modula-3 isn't any good. I'm saying that.)
First, learn a compiled imperative language. I very much like Java as a teaching language. I have a few reasons for liking Java. In addition to being somewhat cross-platform (cue mocking laughter), it is actually a fairly elegant language with a robust, extensive, powerful and, most importantly for the novice, well-documented set of APIs. One of the things I like about Java as a teaching language is that it's always very clear, because of the namespace design, when you're using a "built in" command versus when you are calling some library API; I've seen novices using C and C++ get confused when the distinction wasn't as clear to them. There are many resources to help the novice Java programmer get off the ground. A student learning Java can learn about object-oriented programming concepts, threads, events, dynamic allocation and garbage collection, advanced data structures, and most of the items on my list above. The fact that Java runs in a virtual machine is both a benefit and a drawback -- it will probably slow your development of understanding how these high-level data structures map to the architecture of the machine you're on, but you can always go back and learn C later. Also, the syntax of Java is simple enough that it won't pollute you or ruin you for other languages (the way, say, Objective-C would).
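To illustrate that namespace point with a small, hypothetical sketch: in Java, a library call plainly arrives through a named, imported package, so a novice can tell at a glance what's part of the language and what's somebody's API.

```java
import java.util.HashMap;
import java.util.Map;

public class Namespaces {
    public static void main(String[] args) {
        // Built in: keywords, operators, and primitive types belong to the language itself.
        int laps = 2 + 2;

        // Library: HashMap visibly comes from the java.util package imported above;
        // nothing about the call site pretends to be part of the core language.
        Map<String, Integer> stats = new HashMap<String, Integer>();
        stats.put("laps", laps);
        System.out.println(stats);
    }
}
```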
And in the interests of disclosing any bias on my part: I don't program in Java on a day to day basis. My day job is all C, all the time.
Next, learn a functional language. Lately I've been toying around with OCaml and ML, and I like them, but really it's hard to go wrong here. Lisp, Scheme, ML -- these are all fine choices. I haven't examined it yet myself, but I've been told that Microsoft Research has a neat language called F# which is basically a version of OCaml that can call .NET library functions. That's pretty tempting, because it has such an aura of the forbidden about it -- take a pristine, educational, not-very-useful-in-practice language and turn it into something that can be used to develop Windows applications. Mmmmmmmmmmmmm, forbidden transgressive language.
Uh, what? What was I saying? Um, yeah, F#. Very bad. Don't use that! It's morally wrong. Learn Standard ML. Yeah.
Lastly, learn an interpreted scripting language of some sort. I like Perl, but Python has a big following, too. People will tell you that scripting languages are just as powerful as compiled languages. They're right. But it's been my personal experience that because of the environment scripting languages grow up in ("Oh my gosh, I have to change all occurrences of this string in every file on the server within 10 minutes, or I'll be fired") the idioms in common usage aren't as carefully thought out. Shortcuts are taken. Error cases are punted. Sloppiness is rampant. I agree fully that this is more of a cultural matter than anything intrinsic to the design of a language, but I can't ignore the reality, and that's how I see it.
If you're an experienced developer and you'd like to chime in on this topic, please feel free to comment below. The only thing I ask is that you keep in mind that the topic is languages for learning the craft of programming, not languages for best accomplishing a specific task. Thanks.
Additional Resources
- The article that inspired me to write this when I came across it: Peter Norvig's Teach Yourself Programming in Ten Years.
- I recently wrote an article on why we use pointers. You might like it.
- My favorite Java book is Peter van der Linden's Just Java.
- Microsoft Research's F# project.
- Wikipedia entry for the lambda calculus, the formal system behind functional languages.
- In the interests of balance, the author of CVSup explains why Modula-3 is great. (But he's wrong).
- The Skeptic's Guide to Objective C.
February 26, 2004
Formula 1 2004 Season Preview
by peterb
Formula 1 is not a sport in decline; it's actually in a full-fledged plummet. The sport is addicted to tobacco money and is scouting race locations in such dynamic and interesting places as Bahrain and Hyderabad; trying to be a fan of Formula 1 nowadays is like trying to give emotional support to your crack-addicted second cousin who is constantly scaring young girls in the supermarket parking lot but somehow still manages to think that they're just intimidated by his great looks.
What do we have to look forward to this year? Fewer European races. More stupid rules. Drivers that can't wait to escape from the teams they are driving for. Where to begin?
Rules Changes
Friday qualifying is gone; now there are two sessions of (single-lap) qualifying on Saturday. I'm leading with this item as it's the only positive development. Unlike everyone else in the rest of the world, I actually like the single-lap qualifying. It adds some drama to know that it only takes one mistake to cost a driver the pole.
Rear wings can now only sport a maximum of three elements, in an effort to reduce the aerodynamic grip on the cars. This makes the cars go slower through the corners, which is a key element of the FIA's "make the race as boring as possible" master plan.
One rule that some commentators think will help the slower teams is the "one engine rule": if a team changes their engine between qualifying and the race, they will drop 10 places on the starting grid. The theory is: the Minardi team is always last, therefore they can change engines without losing any places, and will get a reliability advantage that will help them do better. The problem, of course, is that in racing "reliably slow" is not really much better than "unreliably fast," for values of "unreliable" under "guaranteed to explode in flame."
Drivers
Wanna-be footballer and six-time world champion Michael Schumacher, Inc., is still the lead driver for Ferrari, and is scheduled to win the championship once again. Don't look for any surprises here. About once a month throughout the season various F1 online magazines will post articles with headlines asking "Can Anyone Beat Schumi?" At the risk of spoiling the season, I can reveal that the answer to that question is "No." At times, people will propose various theories as to how and why Michael might manage to lose. Perhaps Bridgestone's tires will fail to be competitive with Michelin. Fernando Alonso will develop further and be able to challenge Schumacher in every race. A meteor will fall from the sky and annihilate the Ferrari paddock. None of these things will happen. Ferrari will dominate again, and despite what many people wish, Schumacher's not about to retire.
Now. Juan Pablo Montoya is still driving for Williams/BMW. He doesn't want to drive for Williams, and Williams doesn't want him to be driving for them; McLaren/Mercedes has offered him a lot of money to drive for them, and he wants it, but Sir Frank Williams (who once put square tires on his 1988 F1 car because he heard a rumor that McLaren was developing Square Tire Technology), has decided that he has to hold on to the driver he doesn't want because someone else wants him. Until next year. With me so far? Williams instead wants Ralf Schumacher to be their main driver, except they don't want to pay him what he wants, and so he doesn't want to drive for Williams either. My understanding is that Williams/BMW's current plan is to actually detonate explosives under their cars at the start of the first race this year, killing both Juan Pablo and Ralf. They will then be able to complete the rest of the season with robotic drivers.
McLaren/Mercedes, meanwhile, has issued a press release saying that until Montoya arrives in 2005, their drivers will be young Finnish sensation Kimi Räikkönen and "that other guy whose name we can never remember. The Scottish guy. You know. With the chin."
There are a few drivers that have been 'promoted' from Formula 3000 this year, but since none of them is my personal favorite, American Townsend Bell, I will snub them by not discussing them here. Fear my awesome power.
Tracks
Bahrain paid Bernie Ecclestone a kerjillion dollars to arrange for a race there, as did China. F1 is continuing to flee Europe as various European countries outlaw tobacco advertising. Really, this is a losing proposition. While people in Bahrain are enthused about this, it marks yet another step away from the traditional, sprawling Grands Prix of Monaco, the Nürburgring, and Spa towards the modern, soulless closed tracks in the spirit of Malaysia. Eventually, we'll end up only having races in Chad, Peru, and Micronesia. F1 needs to kick the tobacco habit: it's not helping the sport in the eyes of fans, although I guess it is helping pay for those extravagant garages. Bahrain and China take the place of France (which is scrambling to slip onto the schedule anyway, although they probably won't manage to raise the money) and Canada, always one of my favorite races. Thanks, Bernie. You're a real mensch.
Conclusion
This would be a good year to start following the World Rally Championship (WRC) instead of Formula 1.
Additional Resources
I find the following resources useful when looking for F1 news:
- GrandPrix.com
- PlanetF1.com
- FastMachines covers all sorts of racing, including the increasingly ridiculous Formula 1
- ...as does RacingLive
- The BBC's F1 page gives you the major media rundown.
- The Scottish guy. You know. With the chin. Even abandoned by his web provider.
February 25, 2004
"The Bomb Has Been Planted"
by peterb
My friend Jon plays board games. Of the hundreds of games he plays, there is one -- Advanced Squad Leader -- that has the distinction of simply being referred to as "the game." For many years the first-person shooter Counterstrike was, for me, "the game." It has recently been re-released for the Xbox and I've been playing it lately. It's rapidly becoming "the game" once again.
Counterstrike is a team-based first person shooter: "terrorists" versus "counterterrorists." Cops and robbers, with objectives. In any given mission, either the terrorists want to blow up some site and the CTs want to prevent them, or the terrorists are trying to hold hostages and the CTs are trying to rescue them. The game was made before 9/11, so "take a bunch of hostages and then kill them, along with yourself," didn't make it into the mission objectives list. And in perhaps the only nod to how the world has changed, the "terrorists have taken over a 747" map is not included in the game. I can't say I blame them for that particular change.
"Enemy Spotted"
Counterstrike was revolutionary when it was released. Many of the innovations it introduced (or at least popularized) have since rippled through to other games in the genre, such that it's easy to forget how stale and rigid the FPS genre was at the time; perhaps the only game that comes close to being as innovative as Counterstrike was the Team Fortress mod, though its creative impulse moved in a different direction.
What are the attributes that make Counterstrike what it is?
- Mission objectives are finite. You have 5 minutes per round; either you bomb the site or you protect the site.
- Death is permanent, within a round. If you die, you sit out the rest of the round. That sounds obvious, but it isn't -- Quake "capture the flag" games generally have constant "respawning" of players going on, so the only consequence of death is a brief interruption and loss of weapons. In CS, your death can make the difference between your team winning or losing.
- Weapons are lethal. It's pretty typical in Quake for a player to be able to take three shotgun blasts to the face, narrowly get caught in the blast radius of a rocket, and take innumerable pistol shots to the chest, and still be able to keep going, especially if he finds a 'health pack'. In Counterstrike, you're in the field. There are no health packs. If someone shoots you in the head with a pistol, you're probably going to die immediately.
- Loving, almost pornographic attention to detail in weapons. Most other games have one platonic representative of each weapon. "The handgun." "The rifle." "The shotgun." In Counterstrike, you can choose between six pistols (should I take the monstrously overpowered but low-capacity Desert Eagle, or the underpowered but quick-firing, high-capacity Glock 9mm?), a bunch of submachine guns, a number of automatic rifles (some with sniper scopes), two types of shotgun, and a variety of armor and grenade options. The author of Counterstrike clearly took his copy of Jane's Infantry Weapons to bed with him every night.
"Hold This Position"
When I heard the game was being released on the Xbox, I rolled my eyes and dismissively waved my paw. Shooters are meant to play with a keyboard and mouse -- there's no way that using two joysticks can even come close to the precision one gets with a well-tuned mouse. I eventually caved in and picked it up because a number of my friends were playing every night, and I wanted to play with that group.
I was right: it's nothing like playing with a mouse. It's completely different. And, to be perfectly honest, it ain't that bad.
It is different. All of my elite keyboard and mouse based skills are meaningless here, of course. The game has a slightly slower pace than on the PC -- aiming takes a bit longer, and it's no longer possible to just whip the mouse around and change directions in an instant (there's actually a control to do exactly that, but it is practically unusable, along with the 'move quietly' control, which is currently my biggest gripe with the game.) Weapon purchasing and selection is easy and intuitive. The core of the game is exactly the same as the Windows version.
One thing that goes a long, long way towards making up for the loss of the mouse is the consistency of the experience. Yes, it's a bit of a burn that I can't aim as well as I used to, but I know that everyone is using the same controller -- everyone shares that handicap. Since the game is going through the Xbox Live service, my confidence that the people I'm playing with aren't cheating is extremely high (except perhaps for this guy). And everyone has headsets. Oh, the joy of the headset.
Yeah, yeah, I know. "You could buy a headset for your PC!" But you know what? I didn't. Neither did anyone else. Everyone using Xbox Live, for the most part, has the headset, and it brings a new dimension to the game. There's something gloriously ominous about hearing someone on your squad say "Uh oh, they're in the--" and then break off as he is brutally cut down in a hail of gunfire.
You can play team vs. team, of course, but the game ships with the ability to have you battle against "bots," or computer controlled opponents. The AI on the bots is pretty good on the higher levels; they seem to know to work as a team. In other words, if we're playing 2 humans against 2 bots, the humans almost always win even against the best bots. If we're playing 8 humans against 8 bots, the bots routinely clean our clocks. They have the basics of Counterstrike-specific tactics down, too -- they use grenades intelligently, will camp near the bomb after they have set it, and will go back to the hostage rescue point to ambush you if the CTs penetrate their defenses on the way in. I like the humans-vs-bots mode because, again, it introduces an innovative new feel to the game; an aggressive, violent, player vs. player game morphs into a somehow more cooperative experience. I don't know why that is -- maybe it's all in my head -- but that's how it feels to me.
My criticisms of the game are few and far between. Probably the most annoying aspect is that your options for configuring scenario selection are fairly primitive; you've basically got a choice between "randomly choose between all available maps" or "keep playing this map over and over." Load times between maps are higher than I'd like, but that was true in the Windows version of the game as well. Some of the best maps -- notably the "Assault" map -- are missing in the Xbox version; that's compensated somewhat by the presence of some new, superb Xbox-only maps.
If you have an Xbox and you like (online) first-person shooters, you should get Counterstrike.
Additional Resources
- The Counterstrike web site.
- Jane's Infantry Weapons
- psu discovers that gaming is like crack.
- The almost but not quite as good Team Fortress
- Someone truly pedantic might be tempted to point out that Counterstrike derived some of its features from Action Quake. While this is true, nobody actually played Action Quake.
February 23, 2004
Final Cut Pro: Why Log Clips?
by peterb
Filmmaking is a creative process. One of the exciting things about editing on a computer rather than with traditional video or film editing machines is that we are free to try new techniques in a comparatively risk-free way. Because of this freedom, I personally found it a bit jarring that Final Cut tries, in subtle ways, to channel the user into doing what I considered to be annoying bookkeeping when capturing video from tape. Specifically, Final Cut tries to encourage you to log your clips rather than just capturing them.
It took me a month and a large project to come face to face with the problems that you invite when you don't log your clips. Now, I understand why the authors of Final Cut push us this way, and I'm a believer. Except for the most trivial of projects, always log your clips. Let's have a brief discussion of what it means to log clips, what the process is for doing it, and most importantly, why you should log clips.
Capture Techniques
A clip is the basic unit of video (and, if applicable, audio) in Final Cut. Clips can be divided into subclips or built up into sequences. Final Cut offers three ways to capture clips: "capture now," "capture clip," and "batch capture."
Capture now is the simplest of the three. Users migrating to FCP from iMovie or Final Cut Express 2 often want to use this mode, because it seems the most analogous to the capture workflow in those tools. Click the "capture now" button, hit "play" on your camcorder or VTR, and Final Cut will begin capturing the video until you hit the escape key, up to a maximum of 30 minutes of video. Capture clip involves logging a single clip, and batch capture involves logging a bunch of clips and then telling FCP "Go capture these clips now." To log a clip, you tell FCP at a minimum the name of the tape or reel the clips are on and the starting and ending timecode of each clip. You can optionally provide scene or take names; if you don't provide them, FCP will pick names for you, along the lines of "clip-1", "clip-2", etc.
Why Should I Log?
So why not just use "capture now" for everything? If I'm willing to live with the 30-minute-per-chunk limitation, isn't it less work than doing all this logging stuff?
Well, no. If you use "capture now," you are limiting your ability to use some of Final Cut's most powerful features. The 30 minute limit is just the first subtle pressure FCP puts on you to avoid the use of capture now. There are other pressures, too: unlike iMovie and FCE 2, FCP won't do the magic "clip separation" where it detects where you paused and unpaused the video camera and splits those clips into separate subclips for you. (Reader Bjørn Hansen correctly points out that you can use the "DV Start/Stop Detect" function in the "Mark" menu to do this splitting after the fact, and then make your subclips into independent master clips to approximate the FCE/iMovie experience. I personally have had issues with master/affiliate clips where FCP behaves unintuitively -- for example, you delete a subclip or a 'duplicated master,' and a whole bunch of media that you didn't expect to go offline disappears, so I avoid this technique).
When you use "capture now," you end up with one big glob of video and audio data, and no metadata other than what you add after the fact (in the initial revision of this article, I claimed that this made it impossible to work in OfflineRT mode, but reader Tom Wolsky pointed out that I am mistaken). The lack of metadata is a problem in larger projects: I have an interview project which spans 5 DV tapes. All the interviews are with one subject. Frankly, I have no idea which segments of the interview are on which tape, other than through logging. Had I used "capture now" instead of logging the clips, when I wanted to reconnect media (either for offlineRT work or because I deleted media to conserve disk space), I would have to manually start looking at all the tapes to figure out which one I needed before recapturing. Sure, I could keep a page of copious notes attached to every tape, but avoiding that sort of drudge work is why I'm using a computer. If I log my clips, when I need to recapture, FCP prompts me to insert tape "interview-daytime-4", I find my clearly labeled tape on the shelf, and I'm done. I think that's worth something. "Capture now" is a workable solution if you always know exactly what scenes are on what tapes. I don't; I prefer to let the database in Final Cut track that information for me.
Tom Wolsky still thinks I am being too hard on "capture now," and he has written books about Final Cut Pro, so you should probably listen to him, and not me. Tom's point is that even if you use "capture now," you are still (morally) obligated to actually log information about reels, etc., and so really it's no different than using "capture clip" or batch capture. I both agree and disagree with Tom -- I agree that if you log carefully, the use of "capture now" is fine. My concern is that that path makes it too easy to say "Well, I'll capture now and then log later," and then you skip the "log later" part, and now you're in a world of hurt. I think this is especially true for people coming to FCP from the iMovie world, who are less likely to understand why one should log clips carefully. So my personal rule is to log them beforehand.
So this is one reason we log clips: our tapes have timecode, and the timecode never changes, so if we tell Final Cut what clips a project contains in terms of timecode and tape rather than in terms of "grab this clip," we can recover. No matter what goes wrong, no matter how badly we screw a project up, if we have a list of logged clips and an original tape -- a backup of an edit list file that takes up just a few kilobytes -- we can recover a lot of our work.
Another reason we log clips is that it allows us to offline clips with impunity. If we are running low on disk space or memory, we can edit a project in OfflineRT mode at a lower resolution and increase the responsiveness of our machines. Or we can simply choose "make offline" and delete clips that we're not actually using at the moment, knowing that when we want to work with them later we can just slap the tape in the camcorder, hit "reconnect media," and go get a cup of coffee while FCP does the drudge work for us.
One final reason I want to suggest is that logging clips can actually help the creative process by giving you what amounts to a pre-edit winnowing. If you're anything like me, you shoot too much material. Not being Orson Welles, I very often shoot from the hip. Sometimes I go into a project not having a plan, but just say to myself "well, I'll shoot way too much and then edit it down later." Logging your clips gives you a chance to look at your work in raw form and make the easy choices before devoting time and disk space to capturing it.
Typically, if I'm capturing from a 60 minute tape (assuming it's on a project I haven't organized carefully beforehand -- there are of course exceptions), I'll find only about 20 minutes of material worthy of actually capturing. Those 20 minutes get captured. I will certainly winnow further while editing online, of course, but that first step gets a huge amount of material out of my way. That frees me up to look more closely at the material that was actually worth working on without getting distracted by footage that I knew was garbage to begin with.
How to Log Clips
Everyone has their own workflow for logging clips. I'll share mine here. Generally, I'll sit down with the camcorder or a monitor and a pad of paper and a pen. I generally don't do this first phase at the computer, because otherwise I get tempted into making edits and moving too fast. Also, my tiny simian brain is easily distracted by shiny things, and the Final Cut Pro GUI is very shiny. By working with just the video and a piece of paper, I'm able to focus all my attention on the content. I'll play the tape and start taking notes on what timecodes correspond to logical clips in my mind. It's fine to be approximate here -- rounding to the nearest second or two will do.
When I'm done watching the tape, I have a handwritten list of timecodes and names of clips. I then take those, choose "Log and Capture" from the File menu, and start logging the clips. I'll generally log even the clips I know I won't capture, just so that I have the record preserved. Some people might find the first 'offline' viewing to be intolerable; I find it helps keep me focused on the content and not the UI of my editing program. It's entirely possible to do your first cut online and log clips directly into FCP as you see them. Some of the URLs that follow this article give a good explanation of how to do this.
Last Words
Now you've logged your clips, captured what you want, and you're ready to edit, right? Wrong! There's one more thing you should do:
Pick up your carefully labeled tape, flip the write-protect tab to 'read-only', and put the tape away. Don't use it again. Once you've logged clips from a tape, never write to it again. All it takes is one slip of the finger to turn your carefully collected logging data into a worthless pile of junk. Sure, it means you have to buy more tapes. A tape costs $5. Your time is worth much more than the cost of a tape. Put the tape away.
And, lastly, don't forget to back up your project files frequently.
I hope you've found this article useful. When I first started using Final Cut, I found lots of material explaining how to log my clips, but not really any in-depth explanation of why I would want to do so. If you've found this article to be useful, feel free to let me know. Likewise, if you see any errors or inadequacies within, I'd like to hear from you so I can correct them.
Additional Resources
- Logging and Capturing Video is a useful training course at Berkeley with step by step instructions for logging and capturing with final cut, using different methods.
- The LA Final Cut Pro User's Group has another tutorial in a slightly more informal, chatty style (but strangely uses exactly the same images as the Berkeley course)
- Editing Offline in Final Cut Pro
- Tom Wolsky's Final Cut books
February 21, 2004
Ceci n'est pas un manifesto
by peterb
It's time to make what, I hope, will be the only self-referential post on this site. It's time to specify the rules of what I'm going to be writing about here.
I probably shouldn't do this at all, since it intrinsically violates some of the very rules I'm going to lay down, but I feel like I have a few things to get out of my system. If I can get those things out on paper now, once and for all, I will have a document that I can refer to later when I have the urge to publish something stupid. I will read that document -- this document -- and say "No. Don't do that. That violates the rules."
This isn't a manifesto, because a manifesto is about telling the public, or the reader, something, as is shown by its derivation from the Latin manufestus: struck with the hand. This is being posted publicly just for the benefit of one reader, myself, so that when I start to lose focus I'll have a nice big road sign to remind me that I should approach my writing with the seriousness that I want it to have.
February 20, 2004
Tigons and Ligers and Bears, oh my!
by peterb
I knew about hinnies and mules, of course, but in the "everyone else probably already knew this but I just found out" department, today I learned that it is possible to crossbreed lions and tigers, creating ligers and tigons.
I don't know why this comes as a surprise to me. I knew that you could crossbreed an Asian leopard cat and Felis silvestris catus and end up with a Bengal cat, but I never envisioned that two cats as different as lions and tigers could crossbreed. I think they look amazingly cool, but I bet the mating procedure is fraught with peril.
Intriguingly, ligers and tigons (a liger has a male lion as a parent, and a tigon has a male tiger as a parent) have very different characteristics, with ligers being prone to gigantism and tigons being prone to dwarfism.
Other interesting hybrids include breeding a serval to a caracal, a sheep to a goat, and a zebra to everything on the planet.
Cats sleeping with dogs! Kids playing loud rock and roll music! WHERE IS IT ALL GOING TO END?
February 19, 2004
Perforce SCM on OS X
by peterb
At the behest of one of the commentators here, I decided to try to download and install Perforce on my OS X laptop and see how the experience compares to installing it on FreeBSD or Windows. It worked fine for me, but I can understand how one would find the experience confusing, particularly if you're not familiar with the Unix side of the world. Here are some notes on the experience.
I have a few advantages over the novice user, in that I've been using Perforce for about four years and already understand the model it uses and why it is better than CVS -- how the repository stores data, what the relationship between a client, a branch, and the repository is. So I deliberately tried to forget as much as I could and approach this with a fresh mind. I tried to follow the instructions as written, rather than filling in the blanks with things I've learned over the years.
Finding the packages themselves was no problem; Perforce has a page for Mac downloads, and there are client, server, and visual client packages available, along with some extra Macintosh-specific installation notes.
Installing the server was fairly easy; per the install notes I used the Darwin p4d software installation. I then became root via 'sudo', copied the p4d to /usr/local/bin, and did a "chmod a+x /usr/local/bin/p4d". /usr/local/bin is already in my path, so I didn't have to worry about that. But to make a long story short, I agree with reader David Trevas about this point -- it really wouldn't have killed Perforce to have provided a 5 line shell script to do this busywork for me.
Likewise, I installed the client and set up my P4PORT environment variable correctly, but then I already knew how to do that. The installation notes are indeed a bit scatterbrained about this. Hey, Perforce, you're already printing a 3-line error message when my P4PORT is set wrong -- instead of just saying "check $P4PORT", why not spend the extra 25 characters or so to give an example of how to set it to something well formed? Or point to a URL that explains it?
In the release notes, I saw a disclaimer that, at first, I simply didn't believe: "IMPORTANT: The Perforce server on Darwin is case-insensitive, unlike Perforce servers on other UNIX platforms." Ouch. "That can't be right," I thought. I was all ready to be angry at Perforce for being stupid, when I decided to do a little experimenting, and learned Today's Fun Fact: HFS+ is a case-insensitive filesystem. This is so mind-numbingly stupid that I'm actually kind of mad at Apple about it, although I guess it's a well-known bug^H^H^H"feature" of HFS+. Apparently the bug is fixed in 10.3 Server, which provides an option to make the filesystem not suck so much. So my formal recommendation would have to be "Don't run p4d on 10.2 or earlier, and if you run it on 10.3, turn on the 'make the filesystem not suck' bit" (you can't actually just twiddle a bit; you have to newfs the filesystem as case-sensitive HFS, and that, in turn, will probably cause a bunch of your other applications to break).
This is actually going to present a problem to anyone using the p4 client -- or, in fact, any other version control system -- which has files in a directory with filenames that are identical apart from case. In my quick tests, the behavior can be described as "last sync wins" -- if you have two files, "test.txt" and "TEST.TXT", and you sync, whichever is synced last overwrites the other. If you sync a deletion of TEST.TXT, test.txt gets deleted. Sloppy, Apple, really sloppy. OK, I know the filesystem is just doing what it's specced to do -- but what it is specced to do is dumb. For what it's worth, Windows/NTFS seems to have the same behavior. I think it's dumb there, too.
Apart from the case problems, which will afflict any SCM system on this platform, the p4 client worked as expected -- I was able to connect to both the p4d I just started up on my Powerbook, and also to the Perforce server at work. I tried their visual client "p4v," but didn't love it overmuch -- right off the bat it feels like it's missing important functionality. One major point of a visual client, it seems to me, is to make things easier for newcomers, and since p4v won't even let you create a client workspace from the login screen, it seems kind of pointless. Perhaps it gets "better" once you're used to it, but I quickly reverted to just using the command line client. In p4v's favor, it comes in a .sit file and is a clickable OS X app, suitable for installation in your Applications folder. Hopefully future revisions will make it a bit more fully featured.
In summary: Perforce++ for having a readily available OS X client. Perforce++ for providing a powerful and reliable SCM system. And Perforce-- for not making their installation procedures as easy as they should be.
For another perspective from someone actually using Perforce on the Mac for Real Work, read Chris Hanson's journal (the 3:19 pm entry).
I use Perforce day in and day out, but in a nearly pure Unix environment. I'd be interested in feedback from those of you who run p4 clients on MacOS regularly, and I'm very interested to know if there are any of you out there using MacOS as a p4 server. Please share your experience in the comments section below, if you would.
February 17, 2004
Ralph, Don't Run!
by peterb
Please, Ralph, don't run for President. (Requires Flash)
Update: Mark Fiore's take on the same issue.
Rant: Feser is a Nutbar
by peterb
PZ over at Pharyngula links to the second part of Edward Feser's inchoate screed:
"Whatever bland official statement of purpose might appear in the introduction to a modern university's college catalog, its true raison d'etre is in practice nothing other than to destroy utterly whatever allegiance a young person might have to traditional conceptions in morality, religion, politics and culture, to "do dirt" on the faith of his fathers, on his country, and on what most human beings have historically understood to be the imperatives of decency. It is, in short, to propagate Leftism. "
I commented in the forums there, saying something along the lines of "Wow, Feser sure is a nutbar." Someone else took me to task for not refuting his freakish diatribe point by point and otherwise treating him as a serious scholar. So part of me wants to respond to that innocent soul in more detail.
But I think the epithet "nutbar" is about all the response Feser deserves.
I'm not particularly trying to convince anyone of the "wrongness" of his position, because Feser doesn't present any actual arguments in his two pages of urine-stained crayon scribblings. Sometimes -- and surprisingly, I think Feser would agree with this point! -- it's important to stand up and use common sense and say out loud that the Emperor has no clothes (or, in this case, that the creepy homeless guy talking to himself in the subway doesn't seem to have all of his faculties).
The issue isn't disagreeing with the premise of the article, the issue is that the article doesn't present anything at all.
Feser states that the modern University's raison d'etre is to destroy young people's allegiance to decency, because Universities are anti-Western, and do not tolerate dissent from their hegemonic, communist, hedonistic, Freud-believing, Christian-hating (blah blah blah pot-smoking, fornication-having, Che Guevara t-shirt-wearing, my name is Edward Feser and I'm a complete nutbar) orthodoxy. Without going into the (crazy) details of his (crazy) justification for his (crazy) claims, let's just look at the questions that are raised by his claims, prima facie.
(1) What is a "modern University?" I bet Oral Roberts University is surprised to know that they hate the Judeo-Christian tradition. Maybe Feser didn't mean them, but we don't know because he doesn't say. What percentage of modern Universities are serving their Soviet masters? Feser's rhetoric is sweepingly inclusive, so the reader has to assume he means "all of them," which, surprise surprise, is insane.
(2) What does it mean to be "anti-Western"? If 95% of an essential Western institution is "anti-some-notion" and "pro-some-other-notion", then isn't maybe that an indication that, by definition, they're not anti-Western? (This is kinda a distraction, because of course his nutbar claim that "Universities," which he doesn't define, are "anti-Western" is utterly unsupported by any evidence at all; he certainly doesn't actually condescend to provide any.)
(3) Universities don't tolerate dissent, except for this one courageous Professor at Loyola Marymount University, holed up in a bell tower with a rifle and a supply of Grape Ne-Hi and Powerbars. Well, ok, there are a few others. Well, ok, it turns out there are a lot of others. You said it yourself, Colin -- you've had professors who were conservative. So have I. So, frankly, has everyone. Does anyone here remember taking the economics class where they talked about how capitalism sucked and communism was the only right economic system? Hey, me neither, because most economics classes in modern universities don't hold the nutbar beliefs that Feser claims they indoctrinate their students with. Is there one professor out there that holds the positions Feser invents? Tell you what: find that guy, and I'll call him a nutbar, too.
Feser's entire point is not that "some professors are liberals." I think all of us -- and by "us" I mean "all of us that aren't stupid and insane" -- would agree that it is perfectly appropriate for there to be liberal -- even hyper-liberal -- professors at Universities, just as it is appropriate for there to be conservative -- even hyper-conservative -- professors at Universities. It is even possible to make a reasoned argument that professors of a given political bent predominate, or that there are so many that they crush dissent. Feser doesn't even begin to try to make such an argument; from the very first sentence of the first part of his article, he assumes the conclusion he claims to be proving, and then rants about how awful it is.
An important part of the Western tradition is that academics should at least try to provide evidence for their arguments. Feser doesn't. It bothers me that people are getting sidetracked saying things like "Oh yeah, he's right, because once I had this professor who was really really liberal," when that's not what he's claiming -- he's claiming that the institutions are pervasively anti-Western and that their purpose is to corrupt young minds. That, I maintain, is observably false to anyone who actually bothers to, y'know, observe.
In Pittsburgh, there used to be a woman who would hang out outside the stadium and say to everyone who walked past "Go inside -- it's going to rain fire in three days. Go inside! It's going to rain fire in three days!" You couldn't engage her in conversation. People would say "But you said that to me five days ago!" and she would just smile and say "Go inside!" In other words, arguing with her was pointless because as regards the very claims she was making we could look around and observe with our own eyes that they were egregiously, terribly, and pathetically false. So it is with Feser.
There. That's the long version of "He's a nutbar."
It's more than he deserves.
February 15, 2004
Games
by psu
I bought myself an Xbox for my birthday 4 or 5 months ago. I've never been a big gamer. Over the last decade or so, I've played the odd shooter game for a while and then stopped again for years at a time. But, I heard that Half-Life 2 was going to be coming out soon, and since I don't own a PC, I figured an Xbox would be a cheaper way to get that game.
I was also interested in the Xbox since a lot of my friends had them, and I had seen at least one game that interested me a lot: Project Gotham.
So I bought the box, a cheap copy of Gotham, and borrowed a few other games. For the first few months, I would play Gotham once in a while, but nothing serious.
Then a BAD THING happened. Actually, 3 bad things.
1. Project Gotham 2
2. Xbox Live
3. Counterstrike.
Now I feel that a weird shift has happened in my brain. Some kind of chemical imbalance. I find myself scanning game review sites. I bought not one, but TWO football simulation games, and I'm not even much of a football fan (in my defense, my copy of Madden 2003 crashes all the time, so I had to get something else).
Anyway. Project Gotham 2 is just an excellent version 2 of perhaps the best Xbox game there is. The gameplay and the structure of the "Kudos world series" are nearly perfect. They've made the game challenging, but not impossible: even feeble old non-gamers like me can make it to the end and drive at least some of the coolest cars.
And, the Live aspect is super even if you don't want to actually race live against your buddies. This is because there is a global scoreboard for all the races that is an endless source of obsessive competition.
Counterstrike is the sort of first-person shooter that had traditionally been my favorite kind of game. Not much back story, lots of guns, and short rounds that you can start and then quit 15 minutes at a time. Perfect. But, I never thought I'd play this kind of thing on the Xbox because of "the controller problem." Well, I still have controller problems (the two sticks don't seem to allow me to whip around 180 degrees as fast as the mouse), but the game is great anyway, especially since I never played it on the PC.
So, in summary, the first hit wasn't free, but I somehow got myself hooked on the crack anyway. Someone must pay.
February 14, 2004
Final Cut Pro: What Do You Do When Your Soundtrack Sucks?
by peterb
Last night I finished my first large project, a 25 minute oral history documentary about my grandmother. I've been working on it, on and off, for about six months. When I started, I didn't know anything, really, about video production, and now, through the arduous process of making tons of mistakes, I probably still don't know enough to claim to be skilled. However, I am very good at screwing things up. So I do at least have some information to share with you, which is "Here's what I learned by screwing up."
The most important thing I learned is: in a video project, what you hear is more important than what you see.
That's a pretty controversial statement, so let me explain a little bit. What I really mean to say is that producing good (or at least acceptable) audio for a video is harder than producing good (or at least acceptable) video. It's harder for a few reasons:
- When we are shooting video with 'basic' equipment (a camera, some lights, a tripod), we have better previews available to us of video than we do of audio.
- Deciding whether the video is right is easier than deciding if the audio is right.
- We have better and more powerful editing techniques to cover up subtle video mistakes in postproduction than we do to cover up audio mistakes.
- Discontinuity in video has a different (and easier to compensate for) psychological effect on film and video viewers than discontinuity in audio.
So with that in mind, let's talk about some of the ways you can improve the audio on your video production. Up front let me say that I'm not trying to tell you how to improve the quality of the audio during the shoot. I'm far too inexperienced to be able to advise even a five-year-old child on such matters, so if your goal is "get better audio into my editing software," I will simply recommend that you pick up a copy of Jay Rose's superb book Producing Great Sound for Digital Video, and follow that. This article, instead, is targeted at people who, like me, have already screwed up their project and have captured crappy audio. How do you know if you've captured crappy audio? Here are some rules of thumb: if you have thought more about lighting than about sound, your audio is inadequate. If you used an on-camera mic, your audio is inadequate. If you didn't have someone monitoring the audio during the shoot while you concentrated on the visual aspects, your audio is probably inadequate.
Don't use any audio from your camera
Getting decent audio quality is, as I observed, hard. It requires forethought. It requires some expense -- a mic and perhaps a mixer or balanced adapter. It requires effort and care during the shoot. And it requires attention during postproduction. So one of the easiest ways to deal with all of this is: punt! Plan to not use any in-camera sound at all. Making a film of your kid's soccer game? Find a piece of music you like and use that in the background. Making a short film for fun? Use silent film techniques and dialogue cards. Making a nature film? Record voice-overs later using better equipment than you have in the field. Obviously, you can't take this route for every (or even most) projects, but it's worth mentioning as a reasonable choice.
Listen Carefully
Video is easy to evaluate. Put an image up on the screen; your eyes will instantly evaluate it and you can say things like "That's too dark," "It's out of focus," "The frame is unbalanced." Audio can be trickier to tease out. Often, the idiosyncrasies of an audio mix are only apparent when listening at a certain volume, or in a room with certain acoustical properties. With my present project, I knew I had problems with the audio mix, but I thought I had brought them mostly under control. When I previewed the rough cut for my grandmother, who has hearing problems, we turned the volume up substantially. All of the blips and transition problems that were barely -- or not -- perceptible when listening at a normal volume became monstrously apparent and distracting.
The lesson here: I was editing on a computer, but the video was watched in an entertainment room. No one watches TV from 10 feet away with the volume as low as they do when working on a computer one foot away. Either turn those speakers waaaaaay up or, better yet, get a set of good headphones and always use them while editing.
Room Tone
For each scene that you edit, find a gap where no one is talking and cut it into its own short clip. This is the "room tone" for that scene. When you're done with your edit, look along your timeline and find any places where there are audio gaps. (On my project, for example, I used title cards to introduce certain segments; those title cards didn't have any audio with them. The first time I watched the video at full volume, the first cut to a title card, with its hard, sharp drop to no audio, made me wince.) Insert room tone -- hopefully from the scene you are in -- into those gaps. You can use the crossfade audio transition effect to smooth the transition between your room tone and the 'real' soundtrack, but if you play your cards right you shouldn't even have to do that. If you feel that room tone is inappropriate for your project, consider using an audio fade-in/fade-out filter to lessen the harshness of jumps to and from black.
The ear is superb at detecting discontinuities. If you have a jump from room tone to no sound at all, your viewers will hear it and be discomfited by it, even if they aren't consciously aware of it. Know room tone. Love room tone. Use room tone.
The rule of thumb here is "even room tone that sounds bad is probably better than no room tone at all." Another rule of thumb is: when you're previewing your video, pay as much attention to the quality of transitions between scenes as you do to the content and quality of the scenes themselves.
Postprocessing
So far I've found Final Cut Pro's audio filters to be less than thrilling. They don't seem to work very well for me, and the range of control they give you is very limited. However, the filters included with Apple's bundled Soundtrack program are superb, and Soundtrack integrates very smoothly with Final Cut. For almost any dialogue-driven project where you haven't carefully mixed at capture time, you can't go far wrong in putting a compressor filter into the loop. If you have lots of fricatives and plosives in the mix, you can use a limiter to smooth them out. If there's a constant background noise and you are very, very lucky, you might be able to use the band pass filter to reduce its impact. Play around with the filters in Soundtrack. Explore.
Listen Without Watching
When you think you're approaching the sort of sound mix you want for your project, hook your computer up to good speakers or headphones, get a notebook and a pen, play your project, and turn off the monitor. Just listen. Listen carefully. I guarantee you you'll find at least three problems that you didn't notice when you previewed your project while watching. Make notes on the problems when you find them, and go back to the drawing board to see how you can correct them.
I hope you've found this little discussion of what I learned by screwing up my project helpful. If you have tips of your own, feel free to use the comments section below to share them.
February 13, 2004
A Fistful of Hobbits
by peterbThis article at the Wall Street Journal suggests that if MGM and New Line can't work out the details of distributing The Hobbit, perhaps Jackson could invent a new story that takes place in the Lord of the Rings universe.
We have a few suggestions.
February 11, 2004
Physicians for the Unethical Misuse of Patient Information
by peterbSo I've been following this "PCRM reveals that Atkins was fat when he died!" trainwreck for a while now, and I have a few comments.
I think it is a great example of how ideology is more important to interest groups than facts. I think it is a great example of how invalid reasoning is used by interest groups to arrive at conclusions not supported by the evidence they cite. And, most importantly of all, I think it is a great example of how the people need to be protected from the medical community.
I should say that I have been on a nominally low-carb diet for about a year and a half now. I lost about 60 pounds in 6 months, and am now at a very maintainable 200. I don't eat steak for every meal and I don't mainline bacon fat. Basically, I eat lots of fish, meat, and green vegetables, and avoid sugar (except a moderate amount of fresh fruit), and I don't overindulge in bread or pasta. I find the diet to be very maintainable.
I've never found any other diet -- in the sense of "way of eating" -- that let me lose weight without being miserable. I react very noticeably to the presence of sugar or starches in my diet -- I get lots of energy, and then I get sleepy. In other words, I seem to be the sort of "Syndrome X" type Gary Taubes described in his New York Times Magazine article "What If It's All a Big Fat Lie?"
The interesting thing is, I'm sure a low carb diet isn't for everyone. I know people that can eat pasta until they're blue in the face but never gain weight. We may be one species, but we each have individual and unique metabolisms. I have no doubt that an Atkins-like diet would be a pure disaster for some folks. I think we all need to find our own way of eating.
There are plenty of criticisms one can make of low carb diets, not the least of which is that it is reaching the point where it is ubiquitous enough in our society to be annoying, just like vegetarianism.
When I tried to eat vegetarian -- and I've tried several times -- I found I gained weight dramatically. Should I publish a press release saying "Vegetarian diets make you fat"? No. You can't look at a single data point and deduce a trend. The people at the Physicians Committee for Responsible Medicine, which some claim is a front group for PETA (not that that should affect how we evaluate their claims), are presumably trained in the scientific method, and know this. So when I see the Director of PCRM making statements that say, in effect, "Dr. Atkins died obese, therefore the diet he promoted doesn't work," I can only assume they are being deliberately deceptive, rather than just ignorant.
It's even more galling because it is a selective presentation of the evidence. Yes, we can believe from the medical records that Atkins weighed 258 pounds (which, incidentally, is about what I weighed when I started my diet) when he died, yet the same records show that when he was admitted to the hospital after his fall, he weighed 195. Furthermore, my sources indicate that PCRM Director Neal Barnard actually was a key aide to President Bush during the run-up to the Iraq war, and helped evaluate the evidence as to whether Iraq had weapons of mass destruction (note: I am lying), so his cherry-picking only the evidence that serves his ideological axe-grinding is par for the course.
(Update, via boing boing: Does this man look obese to you?)
Edema -- outrageous swelling -- is common in trauma cases. You hear people that have seen loved ones die with edema say that near the end they couldn't even recognize the victim. It is a horrific thing to see, and I hope that none of you ever have to see it. Imagine that this happened to someone you loved. Imagine that you had to watch their face inflate like a balloon as they slipped towards death.
Now imagine that, after they died, someone used that condition to score ideological points. There is a word in English for that sort of behavior. That word is "despicable".
But this isn't really what I wanted to talk to you about today. What I wanted to talk to you about was privacy.
Every time I go to the goddamn doctor, they hand me a goddamn piece of paper which purports to explain how they are in compliance with the Health Insurance Portability and Accountability Act of 1996 (aka HIPAA), which is the fake law the government passed to pretend to protect the privacy of my medical records.
About once a month, we see some famous person's medical records exposed in the media. This time it's Dr. Atkins' turn. As near as I can tell, no one cares that his privacy was violated. If the medical community is this cavalier about violating the rights of people with enough money and power to sue them, I can only assume that they won't even pretend to care about my rights.
PCRM is against laboratory experiments on animals, but they're perfectly OK with fucking a corpse.
Put aside the issue of criminal culpability. Put aside the issue of civil liability. Director Barnard, don't you have an ethical responsibility to not disclose materials obtained in violation of a patient's privacy? And I don't mean what your state's medical licensing board requires. I mean a personal ethical responsibility.
Director Barnard, I hope that, someday, you develop enough ethical consciousness to feel ashamed that you fucked a corpse.
February 09, 2004
What I'm Playing
by peterbA brief list:
- PC: Wizardry 8 (still), Day of Defeat.
- Xbox: Project Gotham Racing 2, GTA3. Counterstrike will be arriving soon (wakka-wakka-wakka-fruit).
- PS2: Fatal Frame 2, Mark of Kri.
- Gamecube: Naruto 2. Zelda is on hold.
February 05, 2004
Saying "Nader is a Fucktard" is not Censorship
by peterbLawrence Lessig has made some righteously angry observations about Ralph Nader who, in typically arrogant fashion, is going around saying stupid and wrongheaded things. Some other folks, notably Aaron Swartz, are saying that Lessig is somehow "forgetting about the First Amendment." I respect Aaron, even when I disagree with him, so it's disappointing to see him making such a weak argument.
In particular, Aaron says:
As Nader said ( and Lessig obviously heard ), running for President is a First Amendment right, involving speech, press, association, and petitioning the government. And in America, we value our First Amendment rights more than the harm that they may cause.
Aaron, this argument is so bogus, dumb, and beneath you that I need to invent a new word to describe it: Squalmish. There. The argument that claiming that Nader is responsible for his actions, or asking Nader to take or not take some action somehow violates his First Amendment rights is amazingly squalmish. Incredibly squalmish. Squalmish to the point of absurdity, one might say.
Lessig doesn't need me to defend him. He does it for himself quite superbly, and I'm not even 1/16th of the lawyer that he is. But maybe I can frame the debate in more prosaic terms that explain exactly why some of us have such a violent reaction to his claim of "censorship."
Nader (or any other idiot) is free to run for President, assuming he meets the Constitutional requirements, which of course he does. If the government were to outlaw his Presidential bid, that would be "censorship," of a sort. If there were a media conspiracy to not give him any air time, that would be another form of censorship, albeit not one that involved the First Amendment. Criticism, however, is not censorship. Criticism is in fact the antithesis of censorship. Nader is free to say and do what he wants, within the confines of the law. The rest of us (Lawrence Lessig, Melissa Block, or me, or anyone) are free to request that he not do so, or beg him to not do so, or to point out that by doing so he is serving the forces of darkness, or is an egotist, or is (quite simply) a fucktard.
It does not infringe on Nader's First Amendment rights to observe that he, fucktardedly, helped elect George Bush. It does not infringe on Nader's First Amendment rights for me to observe that if he does it again, he will continue to be acting like a fucktard. It is not censorship to ask, request, tell, or advise him not to run, or to criticize him when, as we all expect, he makes the wrong decision and runs anyway. The First Amendment guarantees freedom of speech. It does not, should not, and never will guarantee freedom from criticism.
Now stop being so squalmish, Aaron, and return to your usual, better, quality of argument.
February 04, 2004
Good Tools Are Worth Paying For
by peterbI definitely would not have survived the past couple of days at work without Perforce, the version control system of champions. I'm sure there are version control systems other than P4 that are just as good; it's just the particular flavor of crack that I'm addicted to. It just seems like the ubiquity of CVS, the de facto standard for open source projects, is yet another great example of Worse is Better.
Most open source projects you meet use CVS, which stands for "Concurrent Versions System." Everyone uses CVS for exactly two reasons:
- It's free.
- It's not quite as sucky as RCS, its free precursor.
You'll even find advocates on the web explaining why you should use CVS even though it's totally broken. These advocates conveniently overlook the fact that CVS sucks. Let's review, shall we?
- CVS allegedly supports branches, but no one actually uses them, because they're too hard to use.
- CVS is the only version control system ever developed that can somehow manage to generate context diffs that are impossible to apply in a meaningful way.
- CVS doesn't do atomic checkins. This is a huge deal. If you're changing, for example, a source file and a header file, it's entirely possible for the checkin of the header file to succeed and the checkin of the source file to fail. Congratulations! Your tool just broke the build. Also, without atomic checkins, you can't say "give me all the changes that Bill made at such-and-such a time" -- Bill has no way to indicate to CVS that his change to the header file and the source file are interrelated. (There's a sketch of this failure mode below.)
- No locking (see above).
- No (meaningful) automated merging; if someone made a change to the repository while you had a file checked out, your likely path to success is to generate a context diff by hand and apply it by hand.
- No meaningful way to rename files.
- Doesn't scale well to large projects (and yes, I know people use it for large projects anyway. It still doesn't scale well.)
- An inconsistent model leading to a clumsy CLI (I'm not complaining that the user interface is primarily via the command line interface; I'm complaining that the particulars of the command line interface aren't good).
- Litters your source tree with near-infinite amounts of annoying metadata.
So when people call CVS "version control" you need to realize that that's a very optimistic description. It's really more of a hope.
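To make the atomic-checkin complaint concrete, here's a minimal sketch of the failure mode, followed by the same change in Perforce. The file names and commit message are made up for the example, and exactly how CVS falls over depends on your repository's state:

    # CVS: two related files, one logical change, committed one file at a time.
    $ cvs commit -m "add fold_monkey()" monkeybagel.c monkeybagel.h
    # If the commit of monkeybagel.h fails (stale revision, dropped connection, whatever),
    # monkeybagel.c may already be in the repository. The build is broken, and nothing
    # ties the two halves of the change together.

    # Perforce: the same change is a single atomic changelist.
    $ p4 edit monkeybagel.c monkeybagel.h
    $ p4 submit -d "add fold_monkey()"
    # Either both files go in or neither does, and the change gets one changelist number.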
There's just something about this topic that makes people willing to accept a level of badness that they wouldn't take for a minute in, say, their OS, or their text editor, or their web browser. Do you think people would keep using Emacs, even though it is both free and Free, if, periodically, it just decided to corrupt the files you were editing? Yet when it comes to version control, people accept just that. For example, here's a CVS apologist I found on java.net:
"CVS expects you to trust merges. A lot of people don't, usually saying how they once spent hours resolviing [sic] a horrible merge. It doesn't occur to them that the problem was not source-control but bad process - two developers editing the same part of the same file, with at least one failing to synch with the server often enough."
Wow, imagine that -- people failed to realize that their version control system didn't actually control versions of the files they were working on. Those crazy kids with their loud rock and roll music and their hamburger sandwiches and their French fried potatoes!
Think about that claim for a minute. Implicit in it is the idea that any software process model where more than one developer is working on a given region of a file at a time is broken. I guess that's a workable theory, if you don't actually have deadlines, or competent developers. Perhaps the lack of deadline pressure is the whole story of why CVS is adequate for free software (as opposed to Free Software -- there are open source software products which are sold for money and therefore have some deadline pressure). But since CVS provides no locking or notification support, how do I know if someone else is working on the same files I'm working on, anyway? What's my process if I work in a company with 10 people? Do we all send out mail saying "I'm editing the fold_monkey() function in monkeybagel.c now"? What if I work in a company with 50 people? 100? 500?
The idea that process should adapt to the inadequacies of the tool is insane. When a tool is inadequate to good process, you should find a better tool. (In this particular case, the inadequacy is not "CVS doesn't do notification," but "CVS doesn't do atomic checkins or make complex merging workable.")
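For what it's worth, a better tool can also answer the "who else is working on this file?" question directly, because a Perforce client tells the server what it has open. A rough sketch -- the depot path here is hypothetical:

    # List everyone who has the file open for edit, in any client workspace.
    $ p4 opened -a //depot/main/monkeybagel.c
    # Each line of output names a user and client with the file open --
    # the notification that CVS simply has no way to give you.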
Here's another one that I just have to share, and let's be frank, mock:
"CVS is indeed the best version control system I've used."
Translation: CVS is indeed the only version control system I've used.
The best version control system I've used is Perforce.
So what is Perforce? Perforce is basically CVS if CVS wasn't designed by hamsters on crystal meth. Just about every problem you can identify with CVS is fixed in Perforce. There are command line tools that are disturbingly similar to CVS's, with the difference that they actually work. There's a Windows interface, if that's your thing. There's a (good) web interface. Hell, there's even a Half-Life interface to p4. Multiple users working on the same file works intuitively in p4 (the integration procedure is pure love). Multiple branches and/or multiple client views into the same or different branches are easy as pie. You can manage multiple changelists, all checkins are atomic, it's backed by a database, and reverting to earlier versions of the tree (or a branch, or a file) is simple.
The fact that the basic unit of a P4 checkin is a "changelist" has another consequence: you don't check in files to Perforce, you check in a changelist. So if I say "I want to see the specific fix that Bill checked in that fixes bug 14982," it's trivial, no matter how many files it touched. Try doing that in CVS.
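Roughly, that lookup is two commands. The user name and bug number come from the example above; the changelist number is invented for illustration:

    # Show Bill's 20 most recent changelists.
    $ p4 changes -u bill -m 20
    # Pick the changelist whose description mentions bug 14982 -- say it's 5123 --
    # then ask for the whole change: description, every file touched, and the diffs.
    $ p4 describe 5123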
And, most importantly, if you're working on a project with more than three people, you won't want to go hang yourself every time you need to check out or check in a file. At least not because of the version control system.
Random links: This discussion thread over at Joel on Software discusses some of the differences in a slightly less antagonistic tone than I've used here. It's a good read. Ned Batchelder blogged a bit about this too. Dana Epp is looking for advice on which SCM system to use. I plan to tell him to use Perforce; drop by and counter my advice if you disagree.
Perforce is a commercial product, but it is free (as in beer) for two users or fewer, and they're pretty liberal about issuing evaluation licenses so you can see what you're getting yourself into. It may seem comparatively expensive at $750/seat (less as you buy more seats) once you are outfitting a big project, but at my company, at least, it has been worth every single penny. And they claim that they offer free licenses to Open Source projects. So check it out.
What version control do you use? Have you killed anyone yet?
Update: See my article about using Perforce on Mac OS X.
February 02, 2004
Hitchens, Danner, Power, and Frum
by peterbRajeev Advani posts his notes on the debate between Christopher Hitchens, Mark Danner, Samantha Power, and David Frum on the topic (of course) of "Iraq and Beyond." Thanks to Rajeev for preserving his view of the record.
Update: Part two of the debate is described here.