773e2317f086 initial-docs

Merge with default.
author Steve Losh <steve@stevelosh.com>
date Thu, 01 Jul 2010 19:32:49 -0400
parents 85c5e6d231b3 (current diff) 33f5127b20fb (diff)
children 84b11de68417
branches/tags initial-docs
files review/extension_ui.py review/file_templates.py review/web_ui.py

Changes

--- a/README.markdown	Tue Jun 15 20:30:23 2010 -0400
+++ b/README.markdown	Thu Jul 01 19:32:49 2010 -0400
@@ -16,9 +16,9 @@
 Installing
 ==========
 
-`hg-review` requires Mercurial (probably 1.3.1+) and Python 2.5+. It requires
-a few other things too, but they're bundled with the extension so you don't
-need to worry about them.
+`hg-review` requires Mercurial 1.6+ and Python 2.5+. It requires a few other
+things too, but they're bundled with the extension so you don't need to worry
+about them.
 
 First, get hg-review:
 
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/bundled/markdown2/CHANGES.txt	Thu Jul 01 19:32:49 2010 -0400
@@ -0,0 +1,246 @@
+# python-markdown2 Changelog
+
+## python-markdown2 v1.0.1.17
+
+- [Issue 36] Fix "cuddled-lists" extra handling for a
+  looks-like-a-cuddled-list-but-is-indented block. See the
+  "test/tm-cases/cuddled_list_indented.text" test case.
+
+- Experimental new "toc" extra. The returned string from conversion will have
+  a `toc_html` attribute.
+
+- New "header-ids" extra that will add an `id` attribute to headers:
+
+        # My First Section
+
+  will become:
+
+        <h1 id="my-first-section">My First Section</h1>
+
+  An argument can be given for the extra, which will be used as a prefix for
+  the ids:
+  
+        $ cat foo.txt 
+        # hi there
+        $ python markdown2.py foo.txt 
+        <h1>hi there</h1>
+        $ python markdown2.py foo.txt -x header-ids
+        <h1 id="hi-there">hi there</h1>
+        $ python markdown2.py foo.txt -x header-ids=prefix
+        <h1 id="prefix-hi-there">hi there</h1>
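The id derivation shown above (lowercase the header text, hyphenate, optionally prefix) can be sketched in plain Python. This is an illustrative approximation only, not markdown2's actual slugging code:

```python
import re

def header_id_from_text(text, prefix=None):
    # Hypothetical approximation of the "header-ids" extra: lowercase the
    # header text, collapse runs of non-alphanumerics into hyphens, and
    # prepend the optional prefix argument.
    slug = re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")
    return "%s-%s" % (prefix, slug) if prefix else slug

print(header_id_from_text("My First Section"))           # my-first-section
print(header_id_from_text("hi there", prefix="prefix"))  # prefix-hi-there
```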
+
+- Preliminary support for "html-classes" extra: takes a dict mapping HTML tag
+  to the string value to use for a "class" attribute for that emitted tag.
+  Currently just supports "pre" and "code" for code *blocks*.
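An illustrative sketch of what the "html-classes" extra's input looks like and the kind of rewrite it performs on emitted `<pre>`/`<code>` tags. The class names are made up, and this is not markdown2's real emitter:

```python
# Hypothetical: map tag name -> class attribute value, as described above.
html_classes = {"pre": "prettyprint", "code": "lang-python"}

def add_classes(html, html_classes):
    # Naive rewrite of bare opening tags; markdown2 does this internally
    # when emitting code blocks.
    for tag, cls in html_classes.items():
        html = html.replace("<%s>" % tag, '<%s class="%s">' % (tag, cls))
    return html

print(add_classes("<pre><code>x = 1</code></pre>", html_classes))
```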
+
+
+## python-markdown2 v1.0.1.16
+
+- [Issue 33] Implement a "cuddled-lists" extra that allows:
+
+        I did these things:
+        * bullet1
+        * bullet2
+        * bullet3
+
+  to be converted to:
+
+        <p>I did these things:</p>
+
+        <ul>
+        <li>bullet1</li>
+        <li>bullet2</li>
+        <li>bullet3</li>
+        </ul>
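The transformation above can be approximated with a single regex pass: insert a blank line between a paragraph line and an immediately following bullet, so standard Markdown sees two separate blocks. A rough sketch, not markdown2's real implementation:

```python
import re

def uncuddle_lists(text):
    # Insert a blank line before a bullet list that is "cuddled" to the
    # paragraph above it (illustrative approximation of the extra).
    return re.sub(r"(^[^\s*+-].*)\n(?=[*+-] )", r"\1\n\n", text, flags=re.M)

print(uncuddle_lists("I did these things:\n* bullet1\n* bullet2"))
```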
+
+
+## python-markdown2 v1.0.1.15
+
+- [Issue 30] Fix a possible XSS via JavaScript injection in a carefully
+  crafted image reference (usage of double-quotes in the URL).
+
+## python-markdown2 v1.0.1.14
+
+- [Issue 29] Fix security hole in the md5-hashing scheme for handling HTML
+  chunks during processing.
+- [Issue 27] Fix problem with underscores in footnotes content (with
+  "footnotes" extra).
+
+## python-markdown2 v1.0.1.13
+
+- [Issue 24] Set really long sentinel for max-length of link text to avoid
+  problems with reasonably long ones.
+- [Issue 26] Complete the fix for this issue. Before this change the
+  randomized obscuring of 'mailto:' link letters would sometimes result
+  in emails with underscores getting misinterpreted as italics markup.
+
+## python-markdown2 v1.0.1.12
+
+- [Issue 26] Fix bug where email auto linking wouldn't work for emails with
+  underscores. E.g. `Mail me: <foo_bar@example.com>` wouldn't work.
+- Update MANIFEST.in to ensure bin/markdown2 gets included in sdist.
+- [Issue 23] Add support for passing options to pygments for the "code-color"
+  extra. For example:
+
+        >>> markdown("...", extras={'code-color': {"noclasses": True}})
+
+  This `formatter_opts` dict is passed to the pygments HtmlCodeFormatter.
+  Patch from 'svetlyak.40wt'.
+- [Issue 21] Escape naked '>' characters, as is already done for '&' and '<'
+  characters. Note that other markdown implementations (both Perl and PHP) do
+  *not* do this. This results in differing output with two 3rd-party tests:
+  "php-markdown-cases/Backslash escapes.text" and "markdowntest-cases/Amps
+  and angle encoding.tags".
+- "link-patterns" extra: Add support for the href replacement being a
+  callable, e.g.:
+  
+        >>> link_patterns = [
+        ...     (re.compile("PEP\s+(\d+)", re.I),
+        ...      lambda m: "http://www.python.org/dev/peps/pep-%04d/" % int(m.group(1))),
+        ... ]
+        >>> markdown2.markdown("Here is PEP 42.", extras=["link-patterns"],
+        ...     link_patterns=link_patterns)
+        u'<p>Here is <a href="http://www.python.org/dev/peps/pep-0042/">PEP 42</a>.</p>\n'
+
+## python-markdown2 v1.0.1.11
+
+- Fix syntax_color test for the latest Pygments.
+- [Issue 20] Can't assume that `sys.argv` is defined at top-level code --
+  e.g. when used in a PostgreSQL stored procedure. Fix that.
+
+## python-markdown2 v1.0.1.10
+
+- Fix sys.path manipulation in setup.py so `easy_install markdown2-*.tar.gz`
+  works. (Henry Precheur pointed out the problem.)
+- "bin/markdown2" is now a stub runner script rather than a symlink to
+  "lib/markdown2.py". The symlink was a problem for sdist: tar makes it a
+  copy.
+- Added 'xml' extra: passes *one-liner* XML processing instructions and
+  namespaced XML tags without wrapping in a `<p>` -- i.e. treats them as an HTML
+  block tag.
+
+## python-markdown2 v1.0.1.9
+
+- Fix bug in processing text with two HTML comments, where the first comment
+  is cuddled to other content. See "test/tm-cases/two_comments.text". Noted
+  by Wolfgang Machert.
+- Revert change in v1.0.1.6 passing XML processing instructions and one-liner
+  tags. This change caused some bugs. Similar XML processing support will
+  make it back via an "xml" extra.
+
+## python-markdown2 v1.0.1.8
+
+- License note updates to facilitate Thomas Moschny building a package for
+  Fedora Core Linux. No functional change.
+
+## python-markdown2 v1.0.1.7
+
+- Add a proper setup.py and release to pypi:
+  http://pypi.python.org/pypi/markdown2/
+- Move markdown2.py module to a lib subdir. This allows one to put the "lib"
+  dir of a source checkout (e.g. via an svn:externals) on one's Python path
+  without having the .py files at the top level getting in the way.
+
+## python-markdown2 v1.0.1.6
+
+- Fix Python 2.6 deprecation warning about the `md5` module.
+- Pass XML processing instructions and one-liner tags. For example:
+
+        <?blah ...?>
+        <xi:include xmlns:xi="..." />
+
+  Limitations: they must be on one line. Test: pi_and_xinclude.
+  Suggested by Wolfgang Machert.
+
+## python-markdown2 v1.0.1.5
+
+- Add ability for 'extras' to have arguments. Internally the 'extras'
+  attribute of the Markdown class is a dict (it was a set).
+- Add "demote-headers" extra that will demote the markdown for, e.g., an h1
+  to h2-6 by the number of the demote-headers argument.
+      
+        >>> markdown('# this would be an h1', extras={'demote-headers': 2})
+        u'<h3>this would be an h1</h3>\n'
+  
+  This can be useful for user-supplied Markdown content for a sub-section of
+  a page.
+
+## python-markdown2 v1.0.1.4
+
+- [Issue 18] Allow spaces in the URL for link definitions.
+- [Issue 15] Fix some edge cases with backslash-escapes.
+- Fix this error that broke command-line usage:
+
+        NameError: global name 'use_file_vars' is not defined
+
+- Add "pyshell" extra for auto-codeblock'ing Python interactive shell
+  sessions even if they weren't properly indented by the tab width.
+
+## python-markdown2 v1.0.1.3
+
+- Make the use of the `-*- markdown-extras: ... -*-` emacs-style file
+  variables to set "extras" **off** by default. It can be turned on via
+  `--use-file-vars` on the command line and `use_file_vars=True` via the
+  module interface.
+- [Issue 3] Drop the code-color extra hack added *for* issue3 that was
+  causing a unicode error with unicode in a code-colored block,
+  <http://code.google.com/p/python-markdown2/issues/detail?id=3#c8>
+
+## python-markdown2 v1.0.1.2
+
+- [Issue 8] Alleviate some of the incompatibility of the last change by allowing (at
+  the Python module level) the usage of `safe_mode=True` to mean what it used
+  to -- i.e. "replace" safe mode.
+- [Issue 8, **incompatible change**] The "-s|--safe" command line option and
+  the equivalent "safe_mode" option has changed semantics to be a string
+  instead of a boolean. Legal values of the string are "replace" (the old
+  behaviour: literal HTML is replaced with "[HTML_REMOVED]") and "escape"
+  (meta chars in literal HTML are escaped).
+- [Issue 11] Process markup in footnote definition bodies.
+- Add support for `-*- markdown-extras: ... -*-` emacs-style file variables
+  (typically in an XML comment) to set "extras" for the markdown conversion.
+- [Issue 6] Fix problem with footnotes if the reference string had uppercase
+  letters.
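The two `safe_mode` behaviours described in this entry can be sketched as follows. This is illustrative only, not markdown2's internals; the extra applies to literal HTML found in the input, not to HTML markdown2 itself generates:

```python
def apply_safe_mode(html_chunk, safe_mode):
    # "replace": the old boolean behaviour -- drop literal HTML entirely.
    # "escape": neutralize meta characters instead.
    if safe_mode == "replace":
        return "[HTML_REMOVED]"
    elif safe_mode == "escape":
        return (html_chunk.replace("&", "&amp;")
                          .replace("<", "&lt;")
                          .replace(">", "&gt;"))
    raise ValueError("safe_mode must be 'replace' or 'escape'")

print(apply_safe_mode("<script>alert(1)</script>", "escape"))
```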
+
+## python-markdown2 v1.0.1.1
+
+- [Issue 3] Fix conversion of unicode strings.
+- Make the "safe_mode" replacement test overridable via subclassing: change
+  `Markdown.html_removed_text`.
+- [Issue 2] Fix problems with "safe_mode" removing generated HTML, instead of
+  just raw HTML in the text.
+- Add "-s|--safe" command-line option to set "safe_mode" conversion
+  boolean. This option is mainly for compat with markdown.py.
+- Add "link-patterns" extra: allows one to specify a list of regexes that
+  should be automatically made into links. For example, one can define a
+  mapping for things like "Mozilla Bug 1234":
+        
+        regex:  mozilla\s+bug\s+(\d+)
+        href:   http://bugzilla.mozilla.org/show_bug.cgi?id=\1
+  
+  See <http://code.google.com/p/python-markdown2/wiki/Extras> for details.
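The regex/href mapping above can be demonstrated directly with `re.sub` to show the effect; markdown2 performs the equivalent substitution during conversion when the "link-patterns" extra is enabled:

```python
import re

# The example mapping above: "mozilla bug NNNN" -> bugzilla URL.
pattern = re.compile(r"mozilla\s+bug\s+(\d+)", re.I)
text = "See Mozilla Bug 1234 for details."
linked = pattern.sub(
    r'<a href="http://bugzilla.mozilla.org/show_bug.cgi?id=\1">\g<0></a>',
    text)
print(linked)
```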
+- Add a "MarkdownWithExtras" class that enables all extras (except
+  "code-friendly"):
+    
+        >>> import markdown2
+        >>> converter = markdown2.MarkdownWithExtras()
+        >>> converter.convert('...TEXT...')
+        ...HTML...
+
+- [Issue 1] Added "code-color" extra: pygments-based (TODO: link) syntax
+  coloring of code blocks. Requires the pygments Python library on sys.path.
+  See <http://code.google.com/p/python-markdown2/wiki/Extras> for details.
+- [Issue 1] Added "footnotes" extra: adds support for footnotes syntax. See
+  <http://code.google.com/p/python-markdown2/wiki/Extras> for details.
+
+## python-markdown2 v1.0.1.0
+
+- Added "code-friendly" extra: disables the use of leading and trailing `_`
+  and `__` for emphasis and strong. These can easily get in the way when
+  writing docs about source code with variable_list_this and when one is not
+  careful about quoting.
+- Full basic Markdown syntax.
+
+
+(Started maintaining this log 15 Oct 2007. At that point there had been no
+releases of python-markdown2.)
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/bundled/markdown2/CONTRIBUTORS.txt	Thu Jul 01 19:32:49 2010 -0400
@@ -0,0 +1,3 @@
+Trent Mick (primary author)
+Thomas Moschny (redhat packaging, https://bugzilla.redhat.com/show_bug.cgi?id=461692)
+Massimo Di Pierro (security fix, issue 29)
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/bundled/markdown2/LICENSE.txt	Thu Jul 01 19:32:49 2010 -0400
@@ -0,0 +1,58 @@
+This implementation of Markdown is licensed under the MIT License:
+
+    The MIT License
+
+    Copyright (c) 2008 ActiveState Software Inc.
+
+    Permission is hereby granted, free of charge, to any person obtaining a
+    copy of this software and associated documentation files (the
+    "Software"), to deal in the Software without restriction, including
+    without limitation the rights to use, copy, modify, merge, publish,
+    distribute, sublicense, and/or sell copies of the Software, and to permit
+    persons to whom the Software is furnished to do so, subject to the
+    following conditions:
+
+    The above copyright notice and this permission notice shall be included
+    in all copies or substantial portions of the Software.
+
+    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+    OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+    MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN
+    NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
+    DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+    OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+    USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+
+All files in a *source package* of markdown2 (i.e. those available on
+pypi.python.org and the Google Code project "downloads" page) are under the
+MIT license.  However, in the *subversion repository* there are some files
+(used for performance and testing purposes) that are under different licenses
+as follows:
+
+- perf/recipes.pprint
+
+  Python License. This file includes a number of real-world examples of
+  Markdown from the ActiveState Python Cookbook, used for doing some
+  performance testing of markdown2.py.
+
+- test/php-markdown-cases/...
+  test/php-markdown-extra-cases/...
+
+  GPL. These are from the MDTest package announced here:
+  http://six.pairlist.net/pipermail/markdown-discuss/2007-July/000674.html
+
+- test/markdown.py
+
+  GPL 2 or BSD. A copy (currently old) of Python-Markdown -- the other
+  Python Markdown implementation.
+
+- test/markdown.php
+
+  BSD-style. This is PHP Markdown
+  (http://michelf.com/projects/php-markdown/).
+
+- test/Markdown.pl: BSD-style
+
+  A copy of Perl Markdown (http://daringfireball.net/projects/markdown/).
+
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/bundled/markdown2/Makefile.py	Thu Jul 01 19:32:49 2010 -0400
@@ -0,0 +1,663 @@
+
+"""Makefile for the python-markdown2 project.
+
+${common_task_list}
+
+See `mk -h' for options.
+"""
+
+import sys
+import os
+from os.path import join, dirname, normpath, abspath, exists, basename
+import re
+import webbrowser
+from pprint import pprint
+
+from mklib.common import MkError
+from mklib import Task
+from mklib.sh import run_in_dir
+
+
+
+class bugs(Task):
+    """Open bug database page."""
+    def make(self):
+        webbrowser.open("http://code.google.com/p/python-markdown2/issues/list")
+
+class site(Task):
+    """Open the Google Code project page."""
+    def make(self):
+        webbrowser.open("http://code.google.com/p/python-markdown2/")
+
+class sdist(Task):
+    """python setup.py sdist"""
+    def make(self):
+        run_in_dir("%spython setup.py sdist -f --formats zip"
+                % _setup_command_prefix(),
+            self.dir, self.log.debug)
+
+class pypi_upload(Task):
+    """Upload release to pypi."""
+    def make(self):
+        tasks = (sys.platform == "win32"
+                 and "bdist_wininst upload"
+                 or "sdist --formats zip upload")
+        run_in_dir("%spython setup.py %s" % (_setup_command_prefix(), tasks),
+            self.dir, self.log.debug)
+
+        sys.path.insert(0, join(self.dir, "lib"))
+        url = "http://pypi.python.org/pypi/markdown2/"
+        import webbrowser
+        webbrowser.open_new(url)
+
+class googlecode_upload(Task):
+    """Upload sdist to Google Code project site."""
+    deps = ["sdist"]
+    def make(self):
+        helper_in_cwd = exists(join(self.dir, "googlecode_upload.py"))
+        if helper_in_cwd:
+            sys.path.insert(0, self.dir)
+        try:
+            import googlecode_upload
+        except ImportError:
+            raise MkError("couldn't import `googlecode_upload` (get it from http://support.googlecode.com/svn/trunk/scripts/googlecode_upload.py)")
+        if helper_in_cwd:
+            del sys.path[0]
+
+        sys.path.insert(0, join(self.dir, "lib"))
+        import markdown2
+        sdist_path = join(self.dir, "dist",
+            "markdown2-%s.zip" % markdown2.__version__)
+        status, reason, url = googlecode_upload.upload_find_auth(
+            sdist_path,
+            "python-markdown2", # project_name
+            "markdown2 %s source package" % markdown2.__version__, # summary
+            ["Featured", "Type-Archive"]) # labels
+        if not url:
+            raise MkError("couldn't upload sdist to Google Code: %s (%s)"
+                          % (reason, status))
+        self.log.info("uploaded sdist to `%s'", url)
+
+        project_url = "http://code.google.com/p/python-markdown2/"
+        import webbrowser
+        webbrowser.open_new(project_url)
+
+
+
+class test(Task):
+    """Run all tests (except known failures)."""
+    def make(self):
+        for ver, python in self._gen_pythons():
+            if ver < (2,3):
+                # Don't support Python < 2.3.
+                continue
+            elif ver >= (3, 0):
+                # Don't yet support Python 3.
+                continue
+            ver_str = "%s.%s" % ver
+            print "-- test with Python %s (%s)" % (ver_str, python)
+            assert ' ' not in python
+            run_in_dir("%s test.py -- -knownfailure" % python,
+                       join(self.dir, "test"))
+
+    def _python_ver_from_python(self, python):
+        assert ' ' not in python
+        o = os.popen('''%s -c "import sys; print(sys.version)"''' % python)
+        ver_str = o.read().strip()
+        ver_bits = re.split("\.|[^\d]", ver_str, 2)[:2]
+        ver = tuple(map(int, ver_bits))
+        return ver
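The version parsing in `_python_ver_from_python` above boils down to splitting the first two dotted numeric components out of a `sys.version` string. For example (the version string here is made up):

```python
import re

# What the split in _python_ver_from_python extracts: the first two dotted
# numeric components, as a tuple of ints.
ver_str = "2.7.18 (default, Jul  1 2010, 19:32:49)"
ver_bits = re.split(r"\.|[^\d]", ver_str, maxsplit=2)[:2]
print(tuple(map(int, ver_bits)))  # (2, 7)
```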
+    
+    def _gen_python_names(self):
+        yield "python"
+        for ver in [(2,4), (2,5), (2,6), (2,7), (3,0), (3,1)]:
+            yield "python%d.%d" % ver
+            if sys.platform == "win32":
+                yield "python%d%d" % ver
+
+    def _gen_pythons(self):
+        sys.path.insert(0, join(self.dir, "externals", "which"))
+        import which  # get it from http://trentm.com/projects/which
+        python_from_ver = {}
+        for name in self._gen_python_names():
+            for python in which.whichall(name):
+                ver = self._python_ver_from_python(python)
+                if ver not in python_from_ver:
+                    python_from_ver[ver] = python
+        for ver, python in sorted(python_from_ver.items()):
+            yield ver, python
+        
+
+class todo(Task):
+    """Print out todo's and xxx's in the docs area."""
+    def make(self):
+        for path in _paths_from_path_patterns(['.'],
+                excludes=[".svn", "*.pyc", "TO""DO.txt", "Makefile.py",
+                          "*.png", "*.gif", "*.pprint", "*.prof",
+                          "tmp-*"]):
+            self._dump_pattern_in_path("TO\DO\\|XX\X", path)
+
+        path = join(self.dir, "TO""DO.txt")
+        todos = re.compile("^- ", re.M).findall(open(path, 'r').read())
+        print "(plus %d TODOs from TO""DO.txt)" % len(todos)
+
+    def _dump_pattern_in_path(self, pattern, path):
+        os.system("grep -nH '%s' '%s'" % (pattern, path))
+
+class pygments(Task):
+    """Get a copy of pygments in externals/pygments.
+
+    This will be used by the test suite.
+    """
+    def make(self):
+        pygments_dir = join(self.dir, "externals", "pygments")
+        if exists(pygments_dir):
+            run_in_dir("hg pull", pygments_dir, self.log.info)
+            run_in_dir("hg update", pygments_dir, self.log.info)
+        else:
+            if not exists(dirname(pygments_dir)):
+                os.makedirs(dirname(pygments_dir))
+            run_in_dir("hg clone http://dev.pocoo.org/hg/pygments-main %s"
+                        % basename(pygments_dir),
+                       dirname(pygments_dir), self.log.info)
+
+class announce_release(Task):
+    """Send a release announcement. Don't send this multiple times!"""
+    headers = {
+        "To": [
+            "python-markdown2@googlegroups.com",
+            "python-announce@python.org"
+        ],
+        "From": ["Trent Mick <trentm@gmail.com>"],
+        "Subject": "ANN: python-markdown2 %(version)s -- A fast and complete Python implementation of Markdown",
+        "Reply-To": "python-markdown2@googlegroups.com",
+    }
+    if False: # for dev/debugging
+        headers["To"] = ["trentm@gmail.com"]
+    
+    body = r"""
+        ### Where?
+
+        - Project Page: <http://code.google.com/p/python-markdown2/>
+        - PyPI: <http://pypi.python.org/pypi/markdown2/>
+
+        ### What's new?
+        
+        %(whatsnew)s
+        
+        Full changelog: <http://code.google.com/p/python-markdown2/source/browse/trunk/CHANGES.txt>
+        
+        ### What is 'markdown2'?
+        
+        `markdown2.py` is a fast and complete Python implementation of
+        [Markdown](http://daringfireball.net/projects/markdown/) -- a
+        text-to-HTML markup syntax.
+        
+        ### Module usage
+        
+            >>> import markdown2
+            >>> markdown2.markdown("*boo!*")  # or use `html = markdown_path(PATH)`
+            u'<p><em>boo!</em></p>\n'
+        
+            >>> markdowner = Markdown()
+            >>> markdowner.convert("*boo!*")
+            u'<p><em>boo!</em></p>\n'
+            >>> markdowner.convert("**boom!**")
+            u'<p><strong>boom!</strong></p>\n'
+
+        ### Command line usage
+        
+            $ cat hi.markdown
+            # Hello World!
+            $ markdown2 hi.markdown
+            <h1>Hello World!</h1>
+
+        This implementation of Markdown implements the full "core" syntax plus a
+        number of extras (e.g., code syntax coloring, footnotes) as described on
+        <http://code.google.com/p/python-markdown2/wiki/Extras>.
+
+        Cheers,
+        Trent
+
+        --
+        Trent Mick
+        trentm@gmail.com
+        http://trentm.com/blog/
+    """
+    
+    def _parse_changes_txt(self):
+        changes_txt = open(join(self.dir, "CHANGES.txt")).read()
+        sections = re.split(r'\n(?=##)', changes_txt)
+        for section in sections[1:]:
+            first, tail = section.split('\n', 1)
+            if "not yet released" in first:
+                continue
+            break
+
+        whatsnew_text = tail.strip()
+        version = first.strip().split()[-1]
+        if version.startswith("v"):
+            version = version[1:]
+
+        return version, whatsnew_text
+    
+    def make(self):
+        import getpass
+        if getpass.getuser() != "trentm":
+            raise RuntimeError("You're not `trentm`. That's not "
+                "expected here.")
+
+        version, whatsnew = self._parse_changes_txt()
+        data = {
+            "whatsnew": whatsnew,
+            "version": version,
+        }
+
+        headers = {}
+        for name, v in self.headers.items():
+            if isinstance(v, basestring):
+                value = v % data
+            else:
+                value = v
+            headers[name] = value
+        body = _dedent(self.body, skip_first_line=True) % data
+        
+        # Ensure all the footer lines end with two spaces: markdown syntax
+        # for <br/>.
+        lines = body.splitlines(False)
+        idx = lines.index("Cheers,") - 1
+        for i in range(idx, len(lines)):
+            lines[i] += '  '
+        body = '\n'.join(lines)
+
+        print "=" * 70, "body"
+        print body
+        print "=" * 70
+        answer = _query_yes_no(
+            "Send release announcement email for v%s to %s?" % (
+                version, ", ".join(self.headers["To"])),
+            default="no")
+        if answer != "yes":
+            return
+
+        sys.path.insert(0, join(self.dir, "lib"))
+        import markdown2
+        body_html = markdown2.markdown(body)
+        
+        email_it_via_gmail(headers, text=body, html=body_html)
+        self.log.info("announcement sent")
+
+
+
+#---- internal support stuff
+
+# Recipe http://code.activestate.com/recipes/576824/
+def email_it_via_gmail(headers, text=None, html=None, password=None):
+    """Send an email -- with text and HTML parts.
+    
+    @param headers {dict} A mapping with, at least: "To", "Subject" and
+        "From" header values. "To", "Cc" and "Bcc" values must be *lists*,
+        if given.
+    @param text {str} The text email content.
+    @param html {str} The HTML email content.
+    @param password {str} The 'From' gmail user's password. If not given
+        it will be prompted for via `getpass.getpass()`.
+    
+    Derived from http://code.activestate.com/recipes/473810/ and
+    http://stackoverflow.com/questions/778202/smtplib-and-gmail-python-script-problems
+    """
+    from email.MIMEMultipart import MIMEMultipart
+    from email.MIMEText import MIMEText
+    import smtplib
+    import getpass
+    
+    if text is None and html is None:
+        raise ValueError("neither `text` nor `html` content was given for "
+            "sending the email")
+    if not ("To" in headers and "From" in headers and "Subject" in headers):
+        raise ValueError("`headers` dict must include at least all of "
+            "'To', 'From' and 'Subject' keys")
+
+    # Create the root message and fill in the from, to, and subject headers
+    msg_root = MIMEMultipart('related')
+    for name, value in headers.items():
+        msg_root[name] = isinstance(value, list) and ', '.join(value) or value
+    msg_root.preamble = 'This is a multi-part message in MIME format.'
+
+    # Encapsulate the plain and HTML versions of the message body in an
+    # 'alternative' part, so message agents can decide which they want
+    # to display.
+    msg_alternative = MIMEMultipart('alternative')
+    msg_root.attach(msg_alternative)
+
+    # Attach HTML and text alternatives.
+    if text:
+        msg_text = MIMEText(text.encode('utf-8'))
+        msg_alternative.attach(msg_text)
+    if html:
+        msg_text = MIMEText(html.encode('utf-8'), 'html')
+        msg_alternative.attach(msg_text)
+
+    to_addrs = headers["To"] \
+        + headers.get("Cc", []) \
+        + headers.get("Bcc", [])
+    from_addr = msg_root["From"]
+    
+    # Get username and password.
+    from_addr_pats = [
+        re.compile(".*\((.+@.+)\)"),  # Joe (joe@example.com)
+        re.compile(".*<(.+@.+)>"),  # Joe <joe@example.com>
+    ]
+    for pat in from_addr_pats:
+        m = pat.match(from_addr)
+        if m:
+            username = m.group(1)
+            break
+    else:
+        username = from_addr
+    if not password:
+        password = getpass.getpass("%s's password: " % username)
+    
+    smtp = smtplib.SMTP('smtp.gmail.com', 587) # port 465 or 587
+    smtp.ehlo()
+    smtp.starttls()
+    smtp.ehlo()
+    smtp.login(username, password)
+    smtp.sendmail(from_addr, to_addrs, msg_root.as_string())
+    smtp.close()
+
+
+# Recipe: dedent (0.1.2)
+def _dedentlines(lines, tabsize=8, skip_first_line=False):
+    """_dedentlines(lines, tabsize=8, skip_first_line=False) -> dedented lines
+    
+        "lines" is a list of lines to dedent.
+        "tabsize" is the tab width to use for indent width calculations.
+        "skip_first_line" is a boolean indicating if the first line should
+            be skipped for calculating the indent width and for dedenting.
+            This is sometimes useful for docstrings and similar.
+    
+    Same as dedent() except operates on a sequence of lines. Note: the
+    lines list is modified **in-place**.
+    """
+    DEBUG = False
+    if DEBUG: 
+        print "dedent: dedent(..., tabsize=%d, skip_first_line=%r)"\
+              % (tabsize, skip_first_line)
+    indents = []
+    margin = None
+    for i, line in enumerate(lines):
+        if i == 0 and skip_first_line: continue
+        indent = 0
+        for ch in line:
+            if ch == ' ':
+                indent += 1
+            elif ch == '\t':
+                indent += tabsize - (indent % tabsize)
+            elif ch in '\r\n':
+                continue # skip all-whitespace lines
+            else:
+                break
+        else:
+            continue # skip all-whitespace lines
+        if DEBUG: print "dedent: indent=%d: %r" % (indent, line)
+        if margin is None:
+            margin = indent
+        else:
+            margin = min(margin, indent)
+    if DEBUG: print "dedent: margin=%r" % margin
+
+    if margin is not None and margin > 0:
+        for i, line in enumerate(lines):
+            if i == 0 and skip_first_line: continue
+            removed = 0
+            for j, ch in enumerate(line):
+                if ch == ' ':
+                    removed += 1
+                elif ch == '\t':
+                    removed += tabsize - (removed % tabsize)
+                elif ch in '\r\n':
+                    if DEBUG: print "dedent: %r: EOL -> strip up to EOL" % line
+                    lines[i] = lines[i][j:]
+                    break
+                else:
+                    raise ValueError("unexpected non-whitespace char %r in "
+                                     "line %r while removing %d-space margin"
+                                     % (ch, line, margin))
+                if DEBUG:
+                    print "dedent: %r: %r -> removed %d/%d"\
+                          % (line, ch, removed, margin)
+                if removed == margin:
+                    lines[i] = lines[i][j+1:]
+                    break
+                elif removed > margin:
+                    lines[i] = ' '*(removed-margin) + lines[i][j+1:]
+                    break
+            else:
+                if removed:
+                    lines[i] = lines[i][removed:]
+    return lines
+
+def _dedent(text, tabsize=8, skip_first_line=False):
+    """_dedent(text, tabsize=8, skip_first_line=False) -> dedented text
+
+        "text" is the text to dedent.
+        "tabsize" is the tab width to use for indent width calculations.
+        "skip_first_line" is a boolean indicating if the first line should
+            be skipped for calculating the indent width and for dedenting.
+            This is sometimes useful for docstrings and similar.
+    
+    textwrap.dedent(s), but don't expand tabs to spaces
+    """
+    lines = text.splitlines(1)
+    _dedentlines(lines, tabsize=tabsize, skip_first_line=skip_first_line)
+    return ''.join(lines)
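For comparison, the stdlib function this recipe mirrors; per the docstring above, the recipe differs in that it does not expand tabs to spaces and can skip the first line:

```python
import textwrap

# textwrap.dedent removes the common leading whitespace from every line.
s = "    first\n    second\n"
print(textwrap.dedent(s))
```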
+
+
+# Recipe: query_yes_no (1.0)
+def _query_yes_no(question, default="yes"):
+    """Ask a yes/no question via raw_input() and return their answer.
+    
+    "question" is a string that is presented to the user.
+    "default" is the presumed answer if the user just hits <Enter>.
+        It must be "yes" (the default), "no" or None (meaning
+        an answer is required of the user).
+
+    The "answer" return value is one of "yes" or "no".
+    """
+    valid = {"yes":"yes",   "y":"yes",  "ye":"yes",
+             "no":"no",     "n":"no"}
+    if default == None:
+        prompt = " [y/n] "
+    elif default == "yes":
+        prompt = " [Y/n] "
+    elif default == "no":
+        prompt = " [y/N] "
+    else:
+        raise ValueError("invalid default answer: '%s'" % default)
+
+    while 1:
+        sys.stdout.write(question + prompt)
+        choice = raw_input().lower()
+        if default is not None and choice == '':
+            return default
+        elif choice in valid.keys():
+            return valid[choice]
+        else:
+            sys.stdout.write("Please respond with 'yes' or 'no' "\
+                             "(or 'y' or 'n').\n")
+
+
+# Recipe: paths_from_path_patterns (0.3.7)
+def _should_include_path(path, includes, excludes):
+    """Return True iff the given path should be included."""
+    from os.path import basename
+    from fnmatch import fnmatch
+
+    base = basename(path)
+    if includes:
+        for include in includes:
+            if fnmatch(base, include):
+                try:
+                    log.debug("include `%s' (matches `%s')", path, include)
+                except (NameError, AttributeError):
+                    pass
+                break
+        else:
+            try:
+                log.debug("exclude `%s' (matches no includes)", path)
+            except (NameError, AttributeError):
+                pass
+            return False
+    for exclude in excludes:
+        if fnmatch(base, exclude):
+            try:
+                log.debug("exclude `%s' (matches `%s')", path, exclude)
+            except (NameError, AttributeError):
+                pass
+            return False
+    return True
+
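The include/exclude semantics above boil down to `fnmatch` on the basename: includes act as a whitelist (any match wins), and excludes are a blacklist checked afterwards, even for files that matched an include. A minimal standalone sketch of the same logic (Python 3, illustrative only):

```python
from fnmatch import fnmatch
from os.path import basename

def should_include(path, includes, excludes):
    # Mirrors _should_include_path: patterns match the basename only.
    base = basename(path)
    if includes and not any(fnmatch(base, inc) for inc in includes):
        return False  # include list is a whitelist; no match means skip
    if any(fnmatch(base, exc) for exc in excludes):
        return False  # excludes apply even to files an include matched
    return True
```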
+_NOT_SPECIFIED = ("NOT", "SPECIFIED")
+def _paths_from_path_patterns(path_patterns, files=True, dirs="never",
+                              recursive=True, includes=[], excludes=[],
+                              on_error=_NOT_SPECIFIED):
+    """_paths_from_path_patterns([<path-patterns>, ...]) -> file paths
+
+    Generate a list of paths (files and/or dirs) represented by the given path
+    patterns.
+
+        "path_patterns" is a list of paths optionally using the '*', '?' and
+            '[seq]' glob patterns.
+        "files" is boolean (default True) indicating if file paths
+            should be yielded
+        "dirs" is string indicating under what conditions dirs are
+            yielded. It must be one of:
+              never             (default) never yield dirs
+              always            yield all dirs matching given patterns
+              if-not-recursive  only yield dirs for invocations when
+                                recursive=False
+            See use cases below for more details.
+        "recursive" is boolean (default True) indicating if paths should
+            be recursively yielded under given dirs.
+        "includes" is a list of file patterns to include in recursive
+            searches.
+        "excludes" is a list of file and dir patterns to exclude.
+            (Note: This is slightly different than GNU grep's --exclude
+            option which only excludes *files*.  I.e. you cannot exclude
+            a ".svn" dir.)
+        "on_error" is an error callback called when a given path pattern
+            matches nothing:
+                on_error(PATH_PATTERN)
+            If not specified, the default is look for a "log" global and
+            call:
+                log.error("`%s': No such file or directory", PATH_PATTERN)
+            Specify None to do nothing.
+
+    Typically this is useful for a command-line tool that takes a list
+    of paths as arguments. (For Unix-heads: the shell on Windows does
+    NOT expand glob chars, that is left to the app.)
+
+    Use case #1: like `grep -r`
+      {files=True, dirs='never', recursive=(if '-r' in opts)}
+        script FILE     # yield FILE, else call on_error(FILE)
+        script DIR      # yield nothing
+        script PATH*    # yield all files matching PATH*; if none,
+                        # call on_error(PATH*) callback
+        script -r DIR   # yield files (not dirs) recursively under DIR
+        script -r PATH* # yield files matching PATH* and files recursively
+                        # under dirs matching PATH*; if none, call
+                        # on_error(PATH*) callback
+
+    Use case #2: like `file -r` (if it had a recursive option)
+      {files=True, dirs='if-not-recursive', recursive=(if '-r' in opts)}
+        script FILE     # yield FILE, else call on_error(FILE)
+        script DIR      # yield DIR, else call on_error(DIR)
+        script PATH*    # yield all files and dirs matching PATH*; if none,
+                        # call on_error(PATH*) callback
+        script -r DIR   # yield files (not dirs) recursively under DIR
+        script -r PATH* # yield files matching PATH* and files recursively
+                        # under dirs matching PATH*; if none, call
+                        # on_error(PATH*) callback
+
+    Use case #3: kind of like `find .`
+      {files=True, dirs='always', recursive=(if '-r' in opts)}
+        script FILE     # yield FILE, else call on_error(FILE)
+        script DIR      # yield DIR, else call on_error(DIR)
+        script PATH*    # yield all files and dirs matching PATH*; if none,
+                        # call on_error(PATH*) callback
+        script -r DIR   # yield files and dirs recursively under DIR
+                        # (including DIR)
+        script -r PATH* # yield files and dirs matching PATH* and recursively
+                        # under dirs; if none, call on_error(PATH*)
+                        # callback
+    """
+    from os.path import basename, exists, isdir, join
+    from glob import glob
+
+    assert not isinstance(path_patterns, basestring), \
+        "'path_patterns' must be a sequence, not a string: %r" % path_patterns
+    GLOB_CHARS = '*?['
+
+    for path_pattern in path_patterns:
+        # Determine the set of paths matching this path_pattern.
+        for glob_char in GLOB_CHARS:
+            if glob_char in path_pattern:
+                paths = glob(path_pattern)
+                break
+        else:
+            paths = exists(path_pattern) and [path_pattern] or []
+        if not paths:
+            if on_error is None:
+                pass
+            elif on_error is _NOT_SPECIFIED:
+                try:
+                    log.error("`%s': No such file or directory", path_pattern)
+                except (NameError, AttributeError):
+                    pass
+            else:
+                on_error(path_pattern)
+
+        for path in paths:
+            if isdir(path):
+                # 'includes' SHOULD affect whether a dir is yielded.
+                if (dirs == "always"
+                    or (dirs == "if-not-recursive" and not recursive)
+                   ) and _should_include_path(path, includes, excludes):
+                    yield path
+
+                # However, if recursive, 'includes' should NOT affect
+                # whether a dir is recursed into. Otherwise you could
+                # not:
+                #   script -r --include="*.py" DIR
+                if recursive and _should_include_path(path, [], excludes):
+                    for dirpath, dirnames, filenames in os.walk(path):
+                        dir_indices_to_remove = []
+                        for i, dirname in enumerate(dirnames):
+                            d = join(dirpath, dirname)
+                            if dirs == "always" \
+                               and _should_include_path(d, includes, excludes):
+                                yield d
+                            if not _should_include_path(d, [], excludes):
+                                dir_indices_to_remove.append(i)
+                        for i in reversed(dir_indices_to_remove):
+                            del dirnames[i]
+                        if files:
+                            for filename in sorted(filenames):
+                                f = join(dirpath, filename)
+                                if _should_include_path(f, includes, excludes):
+                                    yield f
+
+            elif files and _should_include_path(path, includes, excludes):
+                yield path
+
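The pattern-expansion step at the top of the loop above — glob only when a glob character is present, otherwise test for existence — can be sketched on its own. A standalone Python 3 version (illustrative, not part of the recipe):

```python
import os
from glob import glob

GLOB_CHARS = '*?['

def expand_pattern(path_pattern):
    # Glob only if the pattern contains a glob character; the shell on
    # Windows does not expand these, so the app must do it itself.
    if any(ch in path_pattern for ch in GLOB_CHARS):
        return glob(path_pattern)
    return [path_pattern] if os.path.exists(path_pattern) else []
```

An empty result is where the recipe's `on_error` callback fires.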
+def _setup_command_prefix():
+    prefix = ""
+    if sys.platform == "darwin":
+        # http://forums.macosxhints.com/archive/index.php/t-43243.html
+        # This is an Apple customization to `tar` to avoid creating
+        # '._foo' files for extended-attributes for archived files.
+        prefix = "COPY_EXTENDED_ATTRIBUTES_DISABLE=1 "
+    return prefix
+
+
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/bundled/markdown2/PKG-INFO	Thu Jul 01 19:32:49 2010 -0400
@@ -0,0 +1,27 @@
+Metadata-Version: 1.0
+Name: markdown2
+Version: 1.0.1.17
+Summary: markdown2: A fast and complete Python implementation of Markdown.
+Home-page: http://code.google.com/p/python-markdown2/
+Author: Trent Mick
+Author-email: trentm@gmail.com
+License: http://www.opensource.org/licenses/mit-license.php
+Description: Markdown is a text-to-HTML filter; it translates an easy-to-read /
+        easy-to-write structured text format into HTML.  Markdown's text
+        format is most similar to that of plain text email, and supports
+        features such as headers, *emphasis*, code blocks, blockquotes, and
+        links.  -- http://daringfireball.net/projects/markdown/
+        
+        This is a fast and complete Python implementation of the Markdown
+        spec.
+        
+Platform: any
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Intended Audience :: Developers
+Classifier: License :: OSI Approved :: MIT License
+Classifier: Programming Language :: Python
+Classifier: Operating System :: OS Independent
+Classifier: Topic :: Software Development :: Libraries :: Python Modules
+Classifier: Topic :: Software Development :: Documentation
+Classifier: Topic :: Text Processing :: Filters
+Classifier: Topic :: Text Processing :: Markup :: HTML 
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/bundled/markdown2/README.txt	Thu Jul 01 19:32:49 2010 -0400
@@ -0,0 +1,95 @@
+markdown2 README
+================
+
+This is a fast and complete Python implementation of Markdown, a text-to-HTML
+markup system as defined here:
+
+    http://daringfireball.net/projects/markdown/syntax
+
+
+Install
+-------
+
+To install it in your Python installation run:
+
+    python setup.py install
+
+However, everything you need to run this is in "lib/markdown2.py". If it is
+easier for you, you can just copy that file to somewhere on your PythonPath
+(to use as a module) or executable path (to use as a script).
+
+
+Quick Usage
+-----------
+
+As a module:
+
+    >>> import markdown2
+    >>> markdown2.markdown("*boo!*")  # or use `html = markdown_path(PATH)`
+    u'<p><em>boo!</em></p>\n'
+
+    >>> markdowner = Markdown()
+    >>> markdowner.convert("*boo!*")
+    u'<p><em>boo!</em></p>\n'
+    >>> markdowner.convert("**boom!**")
+    u'<p><strong>boom!</strong></p>\n'
+
+As a script:
+
+    $ python markdown2.py foo.txt > foo.html
+
+See the project pages, "lib/markdown2.py" docstrings and/or 
+`python markdown2.py --help` for more details.
+
+
+Project
+-------
+
+The python-markdown2 project lives here (subversion repo, issue tracker,
+wiki):
+
+    http://code.google.com/p/python-markdown2/
+
+To checkout the full sources:
+
+    svn checkout http://python-markdown2.googlecode.com/svn/trunk/ python-markdown2
+
+To report a bug:
+
+    http://code.google.com/p/python-markdown2/issues/list
+
+
+License
+-------
+
+This project is licensed under the MIT License. 
+
+Note that in the subversion repository there are a few files (for the test
+suite and performance metrics) that are under different licenses. These files
+are *not* included in source packages. See LICENSE.txt for details.
+
+
+Test Suite
+----------
+
+This markdown implementation passes a fairly extensive test suite. To run it:
+
+    cd test && python test.py
+
+If you have the [mk](http://svn.openkomodo.com/openkomodo/browse/mk/trunk)
+tool installed you can run the test suite with all available Python versions
+by running:
+
+    mk test
+
+The crux of the test suite is a number of "cases" directories -- each with a
+set of matching .text (input) and .html (expected output) files. These are:
+
+    tm-cases/                   Tests authored for python-markdown2
+    markdowntest-cases/         Tests from the 3rd-party MarkdownTest package
+    php-markdown-cases/         Tests from the 3rd-party MDTest package
+    php-markdown-extra-cases/   Tests also from MDTest package
+
+See the wiki page for full details:
+http://code.google.com/p/python-markdown2/wiki/TestingNotes
+
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/bundled/markdown2/TODO.txt	Thu Jul 01 19:32:49 2010 -0400
@@ -0,0 +1,85 @@
+- add "html-classes" extra to wiki
+- bug: can't have '<\w+' in a code span or code block with safe_mode if there
+  is a '>' somewhere later in the document. E.g. code.as.com-beta/CHANGES.md.
+  It captures all of that. Right answer is to not count code spans or code
+  blocks.
+  - add an issue for this
+  - test cases
+  - idea: better sanitation re-write? lot of work
+  - idea: Change all <,>,& emission from markdown processing to something
+    like {LT}, {GT}, {AMP}, {OPENTAG:$tag[:$class]} (first checking for
+    conflicts and escaping those out of the way). Then do sanitization at the
+    end:
+        escape: escape all <,>,& with entities
+        remove: not supported
+        whitelist: (new) build a reasonable default whitelist of patterns to
+            keep. Takes an "extras" argument (and a hook for subclassing)
+            for a custom whitelist. Google Code (was it?) had some list
+            of reasonable whitelist stuff.
+    Then unescape these special chars. The use of OPENTAG above would make
+    "html-classes" extra trivial.
+
+- fix the r135 xml option, add xml extra for it (see email)
+- look at http://code.google.com/p/markdownsharp/
+- add description of pyshell and demote-headers extras to wiki
+- to bring up on markdown-discuss:
+    - the trailing '#' escaping in DoHeaders (provide a patch for this)
+    - the discussion of backticks and backslash-escapes in code spans:
+        also bring in python-markdown-discuss on this
+    - The link for backslash escapes doesn't mention '>', but I believe it
+      should -- viz Markdown.pl's `%g_escape_table` which *does* include '>'.
+      TODO: bring this up on markdown-discuss list.
+- wiki: add an "Other Markdown implementations page"
+    http://daringfireball.net/projects/markdown/
+    http://www.michelf.com/projects/php-markdown/
+    http://www.freewisdom.org/projects/python-markdown/Features
+- test safe_mode on HTML in footnotes
+- compare setup.py stuff from Yannick to what I have now. Also:
+    http://gitorious.org/projects/git-python/repos/mainline/trees/master
+    http://www.python.org/~jeremy/weblog/030924.html
+- http://www.freewisdom.org/projects/python-markdown/Available_Extensions
+- Extras.wiki desc of code-color option. Not sure I love the ":::name"
+  markup for the lexer name.
+- find more unicode edge cases (look for any usage of md5() and make that
+  unicode)
+- update MDTest 1.1? (see
+  http://six.pairlist.net/pipermail/markdown-discuss/2007-September/000815.html)
+  update MDTest tests from http://git.michelf.com/mdtest/
+- I see ref to Markdown.pl 1.0.2
+  (http://six.pairlist.net/pipermail/markdown-discuss/2007-August/000756.html)
+  Update to that? Yes. Copy, at least, in showdown package.
+- take a look at other examples/test-cases from
+  http://adlcommunity.net/help.php?file=advanced_markdown.html
+- googlecode site: Why another Python impl? Test info. Usage/Features page.
+- get on http://en.wikipedia.org/wiki/Markdown
+- ask about remaining two MarkdownTest test failures
+- put in recipes site
+- perhaps some extras from Maruku and PHP Markdown extra
+  (http://maruku.rubyforge.org/maruku.html#extra)
+    - tables (tho I don't really like the syntax; prefer Google Code's, see
+      below)
+    - markdown inside literal HTML (if 'markdown="1|true"' attr)
+    - automatic toc generation (wanted that anyway, not a fan of Maruku's
+      syntax for this)
+    - weird markup in headers and links (does markdown2.py handle this?)
+    - meta-data syntax? One example of this is ids for headers. How about
+      automatically assigning header ids from the name (a la rest)?
+    - at-the-top email-style headers?
+    - maruku's footnote links are 'fn:1' and 'fnref:1' for a footnote id of
+      'blah'. If this is the PHP Markdown Extras way, then should follow
+      that.
+- googlecode wiki markup ideas?
+  (http://code.google.com/p/support/wiki/WikiSyntax)
+    - ~~strikeout~~
+    - ||tables||simple||syntax||
+- <http://daringfireball.net/2004/12/markdown_licensing> at bottom has a wish
+  list:
+    - simple "cite" for blockquote. How about:
+        [Zaphod Breeblebrox]
+        > blah blah
+        > blah
+- do perf comparison with the other Markdown impls (if compare horribly then
+  do something about it)
+- submit a Markdown.py (or .pl?) fix based on revision 1895 (on tm svn)
+- see about using html5lib (for speed and/or for better raw HTML handling)
+- see about plugins (SmartyPants, others available)
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/bundled/markdown2/lib/markdown2.py	Thu Jul 01 19:32:49 2010 -0400
@@ -0,0 +1,2048 @@
+#!/usr/bin/env python
+# Copyright (c) 2007-2008 ActiveState Corp.
+# License: MIT (http://www.opensource.org/licenses/mit-license.php)
+
+r"""A fast and complete Python implementation of Markdown.
+
+[from http://daringfireball.net/projects/markdown/]
+> Markdown is a text-to-HTML filter; it translates an easy-to-read /
+> easy-to-write structured text format into HTML.  Markdown's text
+> format is most similar to that of plain text email, and supports
+> features such as headers, *emphasis*, code blocks, blockquotes, and
+> links.
+>
+> Markdown's syntax is designed not as a generic markup language, but
+> specifically to serve as a front-end to (X)HTML. You can use span-level
+> HTML tags anywhere in a Markdown document, and you can use block level
+> HTML tags (like <div> and <table> as well).
+
+Module usage:
+
+    >>> import markdown2
+    >>> markdown2.markdown("*boo!*")  # or use `html = markdown_path(PATH)`
+    u'<p><em>boo!</em></p>\n'
+
+    >>> markdowner = Markdown()
+    >>> markdowner.convert("*boo!*")
+    u'<p><em>boo!</em></p>\n'
+    >>> markdowner.convert("**boom!**")
+    u'<p><strong>boom!</strong></p>\n'
+
+This implementation of Markdown implements the full "core" syntax plus a
+number of extras (e.g., code syntax coloring, footnotes) as described on
+<http://code.google.com/p/python-markdown2/wiki/Extras>.
+"""
+
+cmdln_desc = """A fast and complete Python implementation of Markdown, a
+text-to-HTML conversion tool for web writers.
+
+Supported extras (see -x|--extras option below):
+* code-friendly: Disable _ and __ for em and strong.
+* code-color: Pygments-based syntax coloring of <code> sections.
+* cuddled-lists: Allow lists to be cuddled to the preceding paragraph.
+* footnotes: Support footnotes as in use on daringfireball.net and
+  implemented in other Markdown processors (tho not in Markdown.pl v1.0.1).
+* html-classes: Takes a dict mapping html tag names (lowercase) to a
+  string to use for a "class" tag attribute. Currently only supports
+  "pre" and "code" tags. Add an issue if you require this for other tags.
+* pyshell: Treats unindented Python interactive shell sessions as <code>
+  blocks.
+* link-patterns: Auto-link given regex patterns in text (e.g. bug number
+  references, revision number references).
+* xml: Passes one-liner processing instructions and namespaced XML tags.
+"""
+
+# Dev Notes:
+# - There is already a Python markdown processor
+#   (http://www.freewisdom.org/projects/python-markdown/).
+# - Python's regex syntax doesn't have '\z', so I'm using '\Z'. I'm
+#   not yet sure if there are implications with this. Compare 'pydoc sre'
+#   and 'perldoc perlre'.
+
+__version_info__ = (1, 0, 1, 17) # first three nums match Markdown.pl
+__version__ = '1.0.1.17'
+__author__ = "Trent Mick"
+
+import os
+import sys
+from pprint import pprint
+import re
+import logging
+try:
+    from hashlib import md5
+except ImportError:
+    from md5 import md5
+import optparse
+from random import random, randint
+import codecs
+from urllib import quote
+
+
+
+#---- Python version compat
+
+if sys.version_info[:2] < (2,4):
+    from sets import Set as set
+    def reversed(sequence):
+        for i in sequence[::-1]:
+            yield i
+    def _unicode_decode(s, encoding, errors='xmlcharrefreplace'):
+        return unicode(s, encoding, errors)
+else:
+    def _unicode_decode(s, encoding, errors='strict'):
+        return s.decode(encoding, errors)
+
+
+#---- globals
+
+DEBUG = False
+log = logging.getLogger("markdown")
+
+DEFAULT_TAB_WIDTH = 4
+
+
+try:
+    import uuid
+except ImportError:
+    SECRET_SALT = str(randint(0, 1000000))
+else:
+    SECRET_SALT = str(uuid.uuid4())
+def _hash_ascii(s):
+    #return md5(s).hexdigest()   # Markdown.pl effectively does this.
+    return 'md5-' + md5(SECRET_SALT + s).hexdigest()
+def _hash_text(s):
+    return 'md5-' + md5(SECRET_SALT + s.encode("utf-8")).hexdigest()
+
+# Table of hash values for escaped characters:
+g_escape_table = dict([(ch, _hash_ascii(ch))
+                       for ch in '\\`*_{}[]()>#+-.!'])
+
+
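The escape table above swaps each Markdown special character for a collision-resistant placeholder; the per-process salt keeps the placeholders unguessable from document text. A Python 3 equivalent of the same scheme (illustrative sketch only):

```python
import hashlib
import uuid

# A fresh salt per process, as the module does when uuid is available.
SECRET_SALT = str(uuid.uuid4())

def hash_ascii(s):
    # Same scheme as above: an 'md5-'-prefixed salted digest.
    return 'md5-' + hashlib.md5((SECRET_SALT + s).encode('utf-8')).hexdigest()

# One unique placeholder per Markdown special character.
escape_table = {ch: hash_ascii(ch) for ch in '\\`*_{}[]()>#+-.!'}
```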
+
+#---- exceptions
+
+class MarkdownError(Exception):
+    pass
+
+
+
+#---- public api
+
+def markdown_path(path, encoding="utf-8",
+                  html4tags=False, tab_width=DEFAULT_TAB_WIDTH,
+                  safe_mode=None, extras=None, link_patterns=None,
+                  use_file_vars=False):
+    fp = codecs.open(path, 'r', encoding)
+    text = fp.read()
+    fp.close()
+    return Markdown(html4tags=html4tags, tab_width=tab_width,
+                    safe_mode=safe_mode, extras=extras,
+                    link_patterns=link_patterns,
+                    use_file_vars=use_file_vars).convert(text)
+
+def markdown(text, html4tags=False, tab_width=DEFAULT_TAB_WIDTH,
+             safe_mode=None, extras=None, link_patterns=None,
+             use_file_vars=False):
+    return Markdown(html4tags=html4tags, tab_width=tab_width,
+                    safe_mode=safe_mode, extras=extras,
+                    link_patterns=link_patterns,
+                    use_file_vars=use_file_vars).convert(text)
+
+class Markdown(object):
+    # The dict of "extras" to enable in processing -- a mapping of
+    # extra name to argument for the extra. Most extras do not have an
+    # argument, in which case the value is None.
+    #
+    # This can be set via (a) subclassing and (b) the constructor
+    # "extras" argument.
+    extras = None
+
+    urls = None
+    titles = None
+    html_blocks = None
+    html_spans = None
+    html_removed_text = "[HTML_REMOVED]"  # for compat with markdown.py
+
+    # Used to track when we're inside an ordered or unordered list
+    # (see _ProcessListItems() for details):
+    list_level = 0
+
+    _ws_only_line_re = re.compile(r"^[ \t]+$", re.M)
+
+    def __init__(self, html4tags=False, tab_width=4, safe_mode=None,
+                 extras=None, link_patterns=None, use_file_vars=False):
+        if html4tags:
+            self.empty_element_suffix = ">"
+        else:
+            self.empty_element_suffix = " />"
+        self.tab_width = tab_width
+
+        # For compatibility with earlier markdown2.py and with
+        # markdown.py's safe_mode being a boolean, 
+        #   safe_mode == True -> "replace"
+        if safe_mode is True:
+            self.safe_mode = "replace"
+        else:
+            self.safe_mode = safe_mode
+
+        if self.extras is None:
+            self.extras = {}
+        elif not isinstance(self.extras, dict):
+            self.extras = dict([(e, None) for e in self.extras])
+        if extras:
+            if not isinstance(extras, dict):
+                extras = dict([(e, None) for e in extras])
+            self.extras.update(extras)
+        assert isinstance(self.extras, dict)
+        if "toc" in self.extras and not "header-ids" in self.extras:
+            self.extras["header-ids"] = None   # "toc" implies "header-ids"
+        self._instance_extras = self.extras.copy()
+        self.link_patterns = link_patterns
+        self.use_file_vars = use_file_vars
+        self._outdent_re = re.compile(r'^(\t|[ ]{1,%d})' % tab_width, re.M)
+
+    def reset(self):
+        self.urls = {}
+        self.titles = {}
+        self.html_blocks = {}
+        self.html_spans = {}
+        self.list_level = 0
+        self.extras = self._instance_extras.copy()
+        if "footnotes" in self.extras:
+            self.footnotes = {}
+            self.footnote_ids = []
+        if "header-ids" in self.extras:
+            self._count_from_header_id = {} # no `defaultdict` in Python 2.4
+
+    def convert(self, text):
+        """Convert the given text."""
+        # Main function. The order in which other subs are called here is
+        # essential. Link and image substitutions need to happen before
+        # _EscapeSpecialChars(), so that any *'s or _'s in the <a>
+        # and <img> tags get encoded.
+
+        # Clear the global hashes. If we don't clear these, you get conflicts
+        # from other articles when generating a page which contains more than
+        # one article (e.g. an index page that shows the N most recent
+        # articles):
+        self.reset()
+
+        if not isinstance(text, unicode):
+            #TODO: perhaps shouldn't presume UTF-8 for string input?
+            text = unicode(text, 'utf-8')
+
+        if self.use_file_vars:
+            # Look for emacs-style file variable hints.
+            emacs_vars = self._get_emacs_vars(text)
+            if "markdown-extras" in emacs_vars:
+                splitter = re.compile("[ ,]+")
+                for e in splitter.split(emacs_vars["markdown-extras"]):
+                    if '=' in e:
+                        ename, earg = e.split('=', 1)
+                        try:
+                            earg = int(earg)
+                        except ValueError:
+                            pass
+                    else:
+                        ename, earg = e, None
+                    self.extras[ename] = earg
+
+        # Standardize line endings:
+        text = re.sub("\r\n|\r", "\n", text)
+
+        # Make sure $text ends with a couple of newlines:
+        text += "\n\n"
+
+        # Convert all tabs to spaces.
+        text = self._detab(text)
+
+        # Strip any lines consisting only of spaces and tabs.
+        # This makes subsequent regexen easier to write, because we can
+        # match consecutive blank lines with /\n+/ instead of something
+        # contorted like /[ \t]*\n+/ .
+        text = self._ws_only_line_re.sub("", text)
+
+        if self.safe_mode:
+            text = self._hash_html_spans(text)
+
+        # Turn block-level HTML blocks into hash entries
+        text = self._hash_html_blocks(text, raw=True)
+
+        # Strip link definitions, store in hashes.
+        if "footnotes" in self.extras:
+            # Must do footnotes first because an unlucky footnote defn
+            # looks like a link defn:
+            #   [^4]: this "looks like a link defn"
+            text = self._strip_footnote_definitions(text)
+        text = self._strip_link_definitions(text)
+
+        text = self._run_block_gamut(text)
+
+        if "footnotes" in self.extras:
+            text = self._add_footnotes(text)
+
+        text = self._unescape_special_chars(text)
+
+        if self.safe_mode:
+            text = self._unhash_html_spans(text)
+
+        text += "\n"
+        
+        rv = UnicodeWithAttrs(text)
+        if "toc" in self.extras:
+            rv._toc = self._toc
+        return rv
+
+    _emacs_oneliner_vars_pat = re.compile(r"-\*-\s*([^\r\n]*?)\s*-\*-", re.UNICODE)
+    # This regular expression is intended to match blocks like this:
+    #    PREFIX Local Variables: SUFFIX
+    #    PREFIX mode: Tcl SUFFIX
+    #    PREFIX End: SUFFIX
+    # Some notes:
+    # - "[ \t]" is used instead of "\s" to specifically exclude newlines
+    # - "(\r\n|\n|\r)" is used instead of "$" because the sre engine does
+    #   not like anything other than Unix-style line terminators.
+    _emacs_local_vars_pat = re.compile(r"""^
+        (?P<prefix>(?:[^\r\n|\n|\r])*?)
+        [\ \t]*Local\ Variables:[\ \t]*
+        (?P<suffix>.*?)(?:\r\n|\n|\r)
+        (?P<content>.*?\1End:)
+        """, re.IGNORECASE | re.MULTILINE | re.DOTALL | re.VERBOSE)
+
+    def _get_emacs_vars(self, text):
+        """Return a dictionary of emacs-style local variables.
+
+        Parsing is done loosely according to this spec (and according to
+        some in-practice deviations from this):
+        http://www.gnu.org/software/emacs/manual/html_node/emacs/Specifying-File-Variables.html#Specifying-File-Variables
+        """
+        emacs_vars = {}
+        SIZE = pow(2, 13) # 8kB
+
+        # Search near the start for a '-*-'-style one-liner of variables.
+        head = text[:SIZE]
+        if "-*-" in head:
+            match = self._emacs_oneliner_vars_pat.search(head)
+            if match:
+                emacs_vars_str = match.group(1)
+                assert '\n' not in emacs_vars_str
+                emacs_var_strs = [s.strip() for s in emacs_vars_str.split(';')
+                                  if s.strip()]
+                if len(emacs_var_strs) == 1 and ':' not in emacs_var_strs[0]:
+                    # While not in the spec, this form is allowed by emacs:
+                    #   -*- Tcl -*-
+                    # where the implied "variable" is "mode". This form
+                    # is only allowed if there are no other variables.
+                    emacs_vars["mode"] = emacs_var_strs[0].strip()
+                else:
+                    for emacs_var_str in emacs_var_strs:
+                        try:
+                            variable, value = emacs_var_str.strip().split(':', 1)
+                        except ValueError:
+                            log.debug("emacs variables error: malformed -*- "
+                                      "line: %r", emacs_var_str)
+                            continue
+                        # Lowercase the variable name because Emacs allows "Mode"
+                        # or "mode" or "MoDe", etc.
+                        emacs_vars[variable.lower()] = value.strip()
+
+        tail = text[-SIZE:]
+        if "Local Variables" in tail:
+            match = self._emacs_local_vars_pat.search(tail)
+            if match:
+                prefix = match.group("prefix")
+                suffix = match.group("suffix")
+                lines = match.group("content").splitlines(0)
+                #print "prefix=%r, suffix=%r, content=%r, lines: %s"\
+                #      % (prefix, suffix, match.group("content"), lines)
+
+                # Validate the Local Variables block: proper prefix and suffix
+                # usage.
+                for i, line in enumerate(lines):
+                    if not line.startswith(prefix):
+                        log.debug("emacs variables error: line '%s' "
+                                  "does not use proper prefix '%s'"
+                                  % (line, prefix))
+                        return {}
+                    # Don't validate suffix on last line. Emacs doesn't care,
+                    # neither should we.
+                    if i != len(lines)-1 and not line.endswith(suffix):
+                        log.debug("emacs variables error: line '%s' "
+                                  "does not use proper suffix '%s'"
+                                  % (line, suffix))
+                        return {}
+
+                # Parse out one emacs var per line.
+                continued_for = None
+                for line in lines[:-1]: # no var on the last line ("PREFIX End:")
+                    if prefix: line = line[len(prefix):] # strip prefix
+                    if suffix: line = line[:-len(suffix)] # strip suffix
+                    line = line.strip()
+                    if continued_for:
+                        variable = continued_for
+                        if line.endswith('\\'):
+                            line = line[:-1].rstrip()
+                        else:
+                            continued_for = None
+                        emacs_vars[variable] += ' ' + line
+                    else:
+                        try:
+                            variable, value = line.split(':', 1)
+                        except ValueError:
+                            log.debug("local variables error: missing colon "
+                                      "in local variables entry: '%s'" % line)
+                            continue
+                        # Do NOT lowercase the variable name, because Emacs only
+                        # allows "mode" (and not "Mode", "MoDe", etc.) in this block.
+                        value = value.strip()
+                        if value.endswith('\\'):
+                            value = value[:-1].rstrip()
+                            continued_for = variable
+                        else:
+                            continued_for = None
+                        emacs_vars[variable] = value
+
+        # Unquote values.
+        for var, val in emacs_vars.items():
+            if len(val) > 1 and (val.startswith('"') and val.endswith('"')
+               or val.startswith("'") and val.endswith("'")):
+                emacs_vars[var] = val[1:-1]
+
+        return emacs_vars
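The continuation handling above (a trailing backslash joins a variable's value across lines) can be shown standalone. `parse_local_vars` below is a hypothetical minimal re-implementation for illustration, not part of markdown2's API, and it assumes well-formed `var: value` lines:

```python
def parse_local_vars(lines):
    # Minimal sketch of the continued-line logic: a value ending in '\'
    # continues onto the next line, joined with a single space.
    emacs_vars = {}
    continued_for = None
    for line in lines:
        line = line.strip()
        if continued_for:
            variable = continued_for
            if line.endswith('\\'):
                line = line[:-1].rstrip()
            else:
                continued_for = None
            emacs_vars[variable] += ' ' + line
        else:
            variable, value = line.split(':', 1)
            value = value.strip()
            if value.endswith('\\'):
                value = value[:-1].rstrip()
                continued_for = variable
            emacs_vars[variable] = value
    return emacs_vars
```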
+
+    # Cribbed from a post by Bart Lateur:
+    # <http://www.nntp.perl.org/group/perl.macperl.anyperl/154>
+    _detab_re = re.compile(r'(.*?)\t', re.M)
+    def _detab_sub(self, match):
+        g1 = match.group(1)
+        return g1 + (' ' * (self.tab_width - len(g1) % self.tab_width))
+    def _detab(self, text):
+        r"""Remove (leading?) tabs from a file.
+
+            >>> m = Markdown()
+            >>> m._detab("\tfoo")
+            '    foo'
+            >>> m._detab("  \tfoo")
+            '    foo'
+            >>> m._detab("\t  foo")
+            '      foo'
+            >>> m._detab("  foo")
+            '  foo'
+            >>> m._detab("  foo\n\tbar\tblam")
+            '  foo\n    bar blam'
+        """
+        if '\t' not in text:
+            return text
+        return self._detab_re.subn(self._detab_sub, text)[0]
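The substitution above pads each tab out to the next `tab_width`-column tab stop; because every expanded chunk then ends exactly on a stop, taking the chunk length modulo `tab_width` gives the right pad even with several tabs per line. A self-contained sketch of the same arithmetic (default tab width of 4 assumed):

```python
import re

TAB_WIDTH = 4  # markdown2's default tab width

_detab_re = re.compile(r'(.*?)\t', re.M)

def detab(text, tab_width=TAB_WIDTH):
    # Each tab advances to the next multiple of tab_width, so the pad
    # depends on how much of the current chunk precedes the tab.
    def sub(match):
        g1 = match.group(1)
        return g1 + ' ' * (tab_width - len(g1) % tab_width)
    return _detab_re.subn(sub, text)[0]
```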
+
+    _block_tags_a = 'p|div|h[1-6]|blockquote|pre|table|dl|ol|ul|script|noscript|form|fieldset|iframe|math|ins|del'
+    _strict_tag_block_re = re.compile(r"""
+        (                       # save in \1
+            ^                   # start of line  (with re.M)
+            <(%s)               # start tag = \2
+            \b                  # word break
+            (.*\n)*?            # any number of lines, minimally matching
+            </\2>               # the matching end tag
+            [ \t]*              # trailing spaces/tabs
+            (?=\n+|\Z)          # followed by a newline or end of document
+        )
+        """ % _block_tags_a,
+        re.X | re.M)
+
+    _block_tags_b = 'p|div|h[1-6]|blockquote|pre|table|dl|ol|ul|script|noscript|form|fieldset|iframe|math'
+    _liberal_tag_block_re = re.compile(r"""
+        (                       # save in \1
+            ^                   # start of line  (with re.M)
+            <(%s)               # start tag = \2
+            \b                  # word break
+            (.*\n)*?            # any number of lines, minimally matching
+            .*</\2>             # the matching end tag
+            [ \t]*              # trailing spaces/tabs
+            (?=\n+|\Z)          # followed by a newline or end of document
+        )
+        """ % _block_tags_b,
+        re.X | re.M)
+
+    def _hash_html_block_sub(self, match, raw=False):
+        html = match.group(1)
+        if raw and self.safe_mode:
+            html = self._sanitize_html(html)
+        key = _hash_text(html)
+        self.html_blocks[key] = html
+        return "\n\n" + key + "\n\n"
+
+    def _hash_html_blocks(self, text, raw=False):
+        """Hashify HTML blocks
+
+        We only want to do this for block-level HTML tags, such as headers,
+        lists, and tables. That's because we still want to wrap <p>s around
+        "paragraphs" that are wrapped in non-block-level tags, such as anchors,
+        phrase emphasis, and spans. The list of tags we're looking for is
+        hard-coded.
+
+        @param raw {boolean} indicates if these are raw HTML blocks in
+            the original source. It makes a difference in "safe" mode.
+        """
+        if '<' not in text:
+            return text
+
+        # Pass `raw` value into our calls to self._hash_html_block_sub.
+        hash_html_block_sub = _curry(self._hash_html_block_sub, raw=raw)
+
+        # First, look for nested blocks, e.g.:
+        #   <div>
+        #       <div>
+        #       tags for inner block must be indented.
+        #       </div>
+        #   </div>
+        #
+        # The outermost tags must start at the left margin for this to match, and
+        # the inner nested divs must be indented.
+        # We need to do this before the next, more liberal match, because the next
+        # match will start at the first `<div>` and stop at the first `</div>`.
+        text = self._strict_tag_block_re.sub(hash_html_block_sub, text)
+
+        # Now match more liberally, simply from `\n<tag>` to `</tag>\n`
+        text = self._liberal_tag_block_re.sub(hash_html_block_sub, text)
+
+        # Special case just for <hr />. It was easier to make a special
+        # case than to make the other regex more complicated.   
+        if "<hr" in text:
+            _hr_tag_re = _hr_tag_re_from_tab_width(self.tab_width)
+            text = _hr_tag_re.sub(hash_html_block_sub, text)
+
+        # Special case for standalone HTML comments:
+        if "<!--" in text:
+            start = 0
+            while True:
+                # Delimiters for next comment block.
+                try:
+                    start_idx = text.index("<!--", start)
+                except ValueError, ex:
+                    break
+                try:
+                    end_idx = text.index("-->", start_idx) + 3
+                except ValueError, ex:
+                    break
+
+                # Start position for next comment block search.
+                start = end_idx
+
+                # Validate whitespace before comment.
+                if start_idx:
+                    # - Up to `tab_width - 1` spaces before start_idx.
+                    for i in range(self.tab_width - 1):
+                        if text[start_idx - 1] != ' ':
+                            break
+                        start_idx -= 1
+                        if start_idx == 0:
+                            break
+                    # - Must be preceded by 2 newlines or hit the start of
+                    #   the document.
+                    if start_idx == 0:
+                        pass
+                    elif start_idx == 1 and text[0] == '\n':
+                        start_idx = 0  # to match minute detail of Markdown.pl regex
+                    elif text[start_idx-2:start_idx] == '\n\n':
+                        pass
+                    else:
+                        break
+
+                # Validate whitespace after comment.
+                # - Any number of spaces and tabs.
+                while end_idx < len(text):
+                    if text[end_idx] not in ' \t':
+                        break
+                    end_idx += 1
+                # - Must be followed by 2 newlines or hit end of text.
+                if text[end_idx:end_idx+2] not in ('', '\n', '\n\n'):
+                    continue
+
+                # Escape and hash (must match `_hash_html_block_sub`).
+                html = text[start_idx:end_idx]
+                if raw and self.safe_mode:
+                    html = self._sanitize_html(html)
+                key = _hash_text(html)
+                self.html_blocks[key] = html
+                text = text[:start_idx] + "\n\n" + key + "\n\n" + text[end_idx:]
+
+        if "xml" in self.extras:
+            # Treat XML processing instructions and namespaced one-liner
+            # tags as if they were block HTML tags. E.g., if standalone
+            # (i.e. are their own paragraph), the following do not get 
+            # wrapped in a <p> tag:
+            #    <?foo bar?>
+            #
+            #    <xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="chapter_1.md"/>
+            _xml_oneliner_re = _xml_oneliner_re_from_tab_width(self.tab_width)
+            text = _xml_oneliner_re.sub(hash_html_block_sub, text)
+
+        return text
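The protect-then-restore pattern used above can be shown in isolation: each raw HTML block is swapped for an opaque MD5-based key so later block-level transformations cannot touch it, and the keys are swapped back at the end. `hash_text`, `protect_blocks`, and `restore_blocks` below are hypothetical stand-ins for markdown2's internal helpers, not its real API:

```python
import hashlib

def hash_text(s):
    # An opaque placeholder key; MD5 is used here only as a cheap
    # collision-resistant token, not for security.
    return 'md5-' + hashlib.md5(s.encode('utf-8')).hexdigest()

def protect_blocks(html_block, blocks):
    # Replace a raw HTML block with its key, padded by blank lines so it
    # behaves like a standalone paragraph during later processing.
    key = hash_text(html_block)
    blocks[key] = html_block
    return '\n\n' + key + '\n\n'

def restore_blocks(text, blocks):
    # Swap every key back for the original HTML it stood in for.
    for key, html in blocks.items():
        text = text.replace(key, html)
    return text
```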
+
+    def _strip_link_definitions(self, text):
+        # Strips link definitions from text, stores the URLs and titles in
+        # hash references.
+        less_than_tab = self.tab_width - 1
+    
+        # Link defs are in the form:
+        #   [id]: url "optional title"
+        _link_def_re = re.compile(r"""
+            ^[ ]{0,%d}\[(.+)\]: # id = \1
+              [ \t]*
+              \n?               # maybe *one* newline
+              [ \t]*
+            <?(.+?)>?           # url = \2
+              [ \t]*
+            (?:
+                \n?             # maybe one newline
+                [ \t]*
+                (?<=\s)         # lookbehind for whitespace
+                ['"(]
+                ([^\n]*)        # title = \3
+                ['")]
+                [ \t]*
+            )?  # title is optional
+            (?:\n+|\Z)
+            """ % less_than_tab, re.X | re.M | re.U)
+        return _link_def_re.sub(self._extract_link_def_sub, text)
+
+    def _extract_link_def_sub(self, match):
+        id, url, title = match.groups()
+        key = id.lower()    # Link IDs are case-insensitive
+        self.urls[key] = self._encode_amps_and_angles(url)
+        if title:
+            self.titles[key] = title.replace('"', '&quot;')
+        return ""
+
+    def _extract_footnote_def_sub(self, match):
+        id, text = match.groups()
+        text = _dedent(text, skip_first_line=not text.startswith('\n')).strip()
+        normed_id = re.sub(r'\W', '-', id)
+        # Ensure footnote text ends with a couple newlines (for some
+        # block gamut matches).
+        self.footnotes[normed_id] = text + "\n\n"
+        return ""
+
+    def _strip_footnote_definitions(self, text):
+        """A footnote definition looks like this:
+
+            [^note-id]: Text of the note.
+
+                May include one or more indented paragraphs.
+
+        Where,
+        - The 'note-id' can be pretty much anything, though typically it
+          is the number of the footnote.
+        - The first paragraph may start on the next line, like so:
+            
+            [^note-id]:
+                Text of the note.
+        """
+        less_than_tab = self.tab_width - 1
+        footnote_def_re = re.compile(r'''
+            ^[ ]{0,%d}\[\^(.+)\]:   # id = \1
+            [ \t]*
+            (                       # footnote text = \2
+              # First line need not start with the spaces.
+              (?:\s*.*\n+)
+              (?:
+                (?:[ ]{%d} | \t)  # Subsequent lines must be indented.
+                .*\n+
+              )*
+            )
+            # Lookahead for non-space at line-start, or end of doc.
+            (?:(?=^[ ]{0,%d}\S)|\Z)
+            ''' % (less_than_tab, self.tab_width, self.tab_width),
+            re.X | re.M)
+        return footnote_def_re.sub(self._extract_footnote_def_sub, text)
+
+
+    _hr_res = [
+        re.compile(r"^[ ]{0,2}([ ]?\*[ ]?){3,}[ \t]*$", re.M),
+        re.compile(r"^[ ]{0,2}([ ]?\-[ ]?){3,}[ \t]*$", re.M),
+        re.compile(r"^[ ]{0,2}([ ]?\_[ ]?){3,}[ \t]*$", re.M),
+    ]
+
+    def _run_block_gamut(self, text):
+        # These are all the transformations that form block-level
+        # tags like paragraphs, headers, and list items.
+
+        text = self._do_headers(text)
+
+        # Do Horizontal Rules:
+        hr = "\n<hr"+self.empty_element_suffix+"\n"
+        for hr_re in self._hr_res:
+            text = hr_re.sub(hr, text)
+
+        text = self._do_lists(text)
+
+        if "pyshell" in self.extras:
+            text = self._prepare_pyshell_blocks(text)
+
+        text = self._do_code_blocks(text)
+
+        text = self._do_block_quotes(text)
+
+        # We already ran _HashHTMLBlocks() before, in Markdown(), but that
+        # was to escape raw HTML in the original Markdown source. This time,
+        # we're escaping the markup we've just created, so that we don't wrap
+        # <p> tags around block-level tags.
+        text = self._hash_html_blocks(text)
+
+        text = self._form_paragraphs(text)
+
+        return text
+
+    def _pyshell_block_sub(self, match):
+        lines = match.group(0).splitlines(0)
+        _dedentlines(lines)
+        indent = ' ' * self.tab_width
+        s = ('\n' # separate from possible cuddled paragraph
+             + indent + ('\n'+indent).join(lines)
+             + '\n\n')
+        return s
+        
+    def _prepare_pyshell_blocks(self, text):
+        """Ensure that Python interactive shell sessions are put in
+        code blocks -- even if not properly indented.
+        """
+        if ">>>" not in text:
+            return text
+
+        less_than_tab = self.tab_width - 1
+        _pyshell_block_re = re.compile(r"""
+            ^([ ]{0,%d})>>>[ ].*\n   # first line
+            ^(\1.*\S+.*\n)*         # any number of subsequent lines
+            ^\n                     # ends with a blank line
+            """ % less_than_tab, re.M | re.X)
+
+        return _pyshell_block_re.sub(self._pyshell_block_sub, text)
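The pattern above recognizes a shell session that starts with `>>> ` at low indentation and runs until a blank line. A quick check of the same regex, assuming the default tab width of 4 (so `less_than_tab` is 3):

```python
import re

less_than_tab = 3  # tab_width 4 - 1
pyshell_block_re = re.compile(r"""
    ^([ ]{0,%d})>>>[ ].*\n   # first line
    ^(\1.*\S+.*\n)*         # any number of subsequent lines
    ^\n                     # ends with a blank line
    """ % less_than_tab, re.M | re.X)

text = "para\n\n>>> 1 + 1\n2\n\nmore\n"
m = pyshell_block_re.search(text)
```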
+
+    def _run_span_gamut(self, text):
+        # These are all the transformations that occur *within* block-level
+        # tags like paragraphs, headers, and list items.
+    
+        text = self._do_code_spans(text)
+    
+        text = self._escape_special_chars(text)
+    
+        # Process anchor and image tags.
+        text = self._do_links(text)
+    
+        # Make links out of things like `<http://example.com/>`
+        # Must come after _do_links(), because you can use < and >
+        # delimiters in inline links like [this](<url>).
+        text = self._do_auto_links(text)
+
+        if "link-patterns" in self.extras:
+            text = self._do_link_patterns(text)
+    
+        text = self._encode_amps_and_angles(text)
+    
+        text = self._do_italics_and_bold(text)
+    
+        # Do hard breaks:
+        text = re.sub(r" {2,}\n", " <br%s\n" % self.empty_element_suffix, text)
+    
+        return text
+
+    # "Sorta" because auto-links are identified as "tag" tokens.
+    _sorta_html_tokenize_re = re.compile(r"""
+        (
+            # tag
+            </?         
+            (?:\w+)                                     # tag name
+            (?:\s+(?:[\w-]+:)?[\w-]+=(?:".*?"|'.*?'))*  # attributes
+            \s*/?>
+            |
+            # auto-link (e.g., <http://www.activestate.com/>)
+            <\w+[^>]*>
+            |
+            <!--.*?-->      # comment
+            |
+            <\?.*?\?>       # processing instruction
+        )
+        """, re.X)
+    
+    def _escape_special_chars(self, text):
+        # Python markdown note: the HTML tokenization here differs from
+        # that in Markdown.pl, hence the behaviour for subtle cases can
+        # differ (I believe the tokenizer here does a better job because
+        # it isn't susceptible to unmatched '<' and '>' in HTML tags).
+        # Note, however, that '>' is not allowed in an auto-link URL
+        # here.
+        escaped = []
+        is_html_markup = False
+        for token in self._sorta_html_tokenize_re.split(text):
+            if is_html_markup:
+                # Within tags/HTML-comments/auto-links, encode * and _
+                # so they don't conflict with their use in Markdown for
+                # italics and strong.  We're replacing each such
+                # character with its corresponding MD5 checksum value;
+                # this is likely overkill, but it should prevent us from
+                # colliding with the escape values by accident.
+                escaped.append(token.replace('*', g_escape_table['*'])
+                                    .replace('_', g_escape_table['_']))
+            else:
+                escaped.append(self._encode_backslash_escapes(token))
+            is_html_markup = not is_html_markup
+        return ''.join(escaped)
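The alternating walk in `_escape_special_chars` relies on a documented property of `re.split`: when the pattern contains a capturing group, the captured matches are interleaved with the text between them, starting with plain text, which is what lets the loop flip `is_html_markup` each iteration. A demonstration with a simplified tag pattern (not the full `_sorta_html_tokenize_re`):

```python
import re

# Simplified tag pattern for illustration only.
tag_re = re.compile(r'(</?\w+[^>]*>)')

# Even-indexed tokens are plain text, odd-indexed tokens are markup.
tokens = tag_re.split('a *b* <em>c_d</em> e')
```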
+
+    def _hash_html_spans(self, text):
+        # Used for safe_mode.
+
+        def _is_auto_link(s):
+            if ':' in s and self._auto_link_re.match(s):
+                return True
+            elif '@' in s and self._auto_email_link_re.match(s):
+                return True
+            return False
+
+        tokens = []
+        is_html_markup = False
+        for token in self._sorta_html_tokenize_re.split(text):
+            if is_html_markup and not _is_auto_link(token):
+                sanitized = self._sanitize_html(token)
+                key = _hash_text(sanitized)
+                self.html_spans[key] = sanitized
+                tokens.append(key)
+            else:
+                tokens.append(token)
+            is_html_markup = not is_html_markup
+        return ''.join(tokens)
+
+    def _unhash_html_spans(self, text):
+        for key, sanitized in self.html_spans.items():
+            text = text.replace(key, sanitized)
+        return text
+
+    def _sanitize_html(self, s):
+        if self.safe_mode == "replace":
+            return self.html_removed_text
+        elif self.safe_mode == "escape":
+            replacements = [
+                ('&', '&amp;'),
+                ('<', '&lt;'),
+                ('>', '&gt;'),
+            ]
+            for before, after in replacements:
+                s = s.replace(before, after)
+            return s
+        else:
+            raise MarkdownError("invalid value for 'safe_mode': %r (must be "
+                                "'escape' or 'replace')" % self.safe_mode)
+
+    _tail_of_inline_link_re = re.compile(r'''
+          # Match tail of: [text](/url/) or [text](/url/ "title")
+          \(            # literal paren
+            [ \t]*
+            (?P<url>            # \1
+                <.*?>
+                |
+                .*?
+            )
+            [ \t]*
+            (                   # \2
+              (['"])            # quote char = \3
+              (?P<title>.*?)
+              \3                # matching quote
+            )?                  # title is optional
+          \)
+        ''', re.X | re.S)
+    _tail_of_reference_link_re = re.compile(r'''
+          # Match tail of: [text][id]
+          [ ]?          # one optional space
+          (?:\n[ ]*)?   # one optional newline followed by spaces
+          \[
+            (?P<id>.*?)
+          \]
+        ''', re.X | re.S)
+
+    def _do_links(self, text):
+        """Turn Markdown link shortcuts into XHTML <a> and <img> tags.
+
+        This is a combination of Markdown.pl's _DoAnchors() and
+        _DoImages(). They are done together because that simplified the
+        approach. It was necessary to use a different approach than
+        Markdown.pl because of the lack of atomic matching support in
+        Python's regex engine used in $g_nested_brackets.
+        """
+        MAX_LINK_TEXT_SENTINEL = 3000  # markdown2 issue 24
+
+        # `anchor_allowed_pos` is used to support img links inside
+        # anchors, but not anchors inside anchors. An anchor's start
+        # pos must be `>= anchor_allowed_pos`.
+        anchor_allowed_pos = 0
+
+        curr_pos = 0
+        while True: # Handle the next link.
+            # The next '[' is the start of:
+            # - an inline anchor:   [text](url "title")
+            # - a reference anchor: [text][id]
+            # - an inline img:      ![text](url "title")
+            # - a reference img:    ![text][id]
+            # - a footnote ref:     [^id]
+            #   (Only if 'footnotes' extra enabled)
+            # - a footnote defn:    [^id]: ...
+            #   (Only if 'footnotes' extra enabled) These have already
+            #   been stripped in _strip_footnote_definitions() so no
+            #   need to watch for them.
+            # - a link definition:  [id]: url "title"
+            #   These have already been stripped in
+            #   _strip_link_definitions() so no need to watch for them.
+            # - not markup:         [...anything else...
+            try:
+                start_idx = text.index('[', curr_pos)
+            except ValueError:
+                break
+            text_length = len(text)
+
+            # Find the matching closing ']'.
+            # Markdown.pl allows *matching* brackets in link text so we
+            # will here too. Markdown.pl *doesn't* currently allow
+            # matching brackets in img alt text -- we'll differ in that
+            # regard.
+            bracket_depth = 0
+            for p in range(start_idx+1, min(start_idx+MAX_LINK_TEXT_SENTINEL, 
+                                            text_length)):
+                ch = text[p]
+                if ch == ']':
+                    bracket_depth -= 1
+                    if bracket_depth < 0:
+                        break
+                elif ch == '[':
+                    bracket_depth += 1
+            else:
+                # Closing bracket not found within sentinel length.
+                # This isn't markup.
+                curr_pos = start_idx + 1
+                continue
+            link_text = text[start_idx+1:p]
+
+            # Possibly a footnote ref?
+            if "footnotes" in self.extras and link_text.startswith("^"):
+                normed_id = re.sub(r'\W', '-', link_text[1:])
+                if normed_id in self.footnotes:
+                    self.footnote_ids.append(normed_id)
+                    result = '<sup class="footnote-ref" id="fnref-%s">' \
+                             '<a href="#fn-%s">%s</a></sup>' \
+                             % (normed_id, normed_id, len(self.footnote_ids))
+                    text = text[:start_idx] + result + text[p+1:]
+                else:
+                    # This id isn't defined, leave the markup alone.
+                    curr_pos = p+1
+                continue
+
+            # Now determine what this is by the remainder.
+            p += 1
+            if p == text_length:
+                return text
+
+            # Inline anchor or img?
+            if text[p] == '(': # attempt at perf improvement
+                match = self._tail_of_inline_link_re.match(text, p)
+                if match:
+                    # Handle an inline anchor or img.
+                    is_img = start_idx > 0 and text[start_idx-1] == "!"
+                    if is_img:
+                        start_idx -= 1
+
+                    url, title = match.group("url"), match.group("title")
+                    if url and url[0] == '<':
+                        url = url[1:-1]  # '<url>' -> 'url'
+                    # We've got to encode these to avoid conflicting
+                    # with italics/bold.
+                    url = url.replace('*', g_escape_table['*']) \
+                             .replace('_', g_escape_table['_'])
+                    if title:
+                        title_str = ' title="%s"' \
+                            % title.replace('*', g_escape_table['*']) \
+                                   .replace('_', g_escape_table['_']) \
+                                   .replace('"', '&quot;')
+                    else:
+                        title_str = ''
+                    if is_img:
+                        if 'imgless' in self.extras:
+                            result = '[Image: <a href="%s" alt="a link to an image">%s</a> (%s)]' \
+                                % (url.replace('"', '&quot;'),
+                                   title,
+                                   link_text.replace('"', '&quot;'))
+                        else:
+                            result = '<img src="%s" alt="%s"%s%s' \
+                                % (url.replace('"', '&quot;'),
+                                   link_text.replace('"', '&quot;'),
+                                   title_str, self.empty_element_suffix)
+                        curr_pos = start_idx + len(result)
+                        text = text[:start_idx] + result + text[match.end():]
+                    elif start_idx >= anchor_allowed_pos:
+                        result_head = '<a href="%s"%s>' % (url, title_str)
+                        result = '%s%s</a>' % (result_head, link_text)
+                        # <img> allowed from curr_pos on, <a> from
+                        # anchor_allowed_pos on.
+                        curr_pos = start_idx + len(result_head)
+                        anchor_allowed_pos = start_idx + len(result)
+                        text = text[:start_idx] + result + text[match.end():]
+                    else:
+                        # Anchor not allowed here.
+                        curr_pos = start_idx + 1
+                    continue
+
+            # Reference anchor or img?
+            else:
+                match = self._tail_of_reference_link_re.match(text, p)
+                if match:
+                    # Handle a reference-style anchor or img.
+                    is_img = start_idx > 0 and text[start_idx-1] == "!"
+                    if is_img:
+                        start_idx -= 1
+                    link_id = match.group("id").lower()
+                    if not link_id:
+                        link_id = link_text.lower()  # for links like [this][]
+                    if link_id in self.urls:
+                        url = self.urls[link_id]
+                        # We've got to encode these to avoid conflicting
+                        # with italics/bold.
+                        url = url.replace('*', g_escape_table['*']) \
+                                 .replace('_', g_escape_table['_'])
+                        title = self.titles.get(link_id)
+                        if title:
+                            title = title.replace('*', g_escape_table['*']) \
+                                         .replace('_', g_escape_table['_'])
+                            title_str = ' title="%s"' % title
+                        else:
+                            title_str = ''
+                        if is_img:
+                            if 'imgless' in self.extras:
+                                result = '[Image: <a href="%s" alt="a link to an image">%s</a> (%s)]' \
+                                    % (url.replace('"', '&quot;'),
+                                       title,
+                                       link_text.replace('"', '&quot;'))
+                            else:
+                                result = '<img src="%s" alt="%s"%s%s' \
+                                    % (url.replace('"', '&quot;'),
+                                       link_text.replace('"', '&quot;'),
+                                       title_str, self.empty_element_suffix)
+                            curr_pos = start_idx + len(result)
+                            text = text[:start_idx] + result + text[match.end():]
+                        elif start_idx >= anchor_allowed_pos:
+                            result_head = '<a href="%s"%s>' % (url, title_str)
+                            result = '%s%s</a>' % (result_head, link_text)
+                            # <img> allowed from curr_pos on, <a> from
+                            # anchor_allowed_pos on.
+                            curr_pos = start_idx + len(result_head)
+                            anchor_allowed_pos = start_idx + len(result)
+                            text = text[:start_idx] + result + text[match.end():]
+                        else:
+                            # Anchor not allowed here.
+                            curr_pos = start_idx + 1
+                    else:
+                        # This id isn't defined, leave the markup alone.
+                        curr_pos = match.end()
+                    continue
+
+            # Otherwise, it isn't markup.
+            curr_pos = start_idx + 1
+
+        return text 
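The scan near the top of `_do_links` that finds the `]` matching an opening `[` (allowing balanced nested brackets in the link text) can be lifted out as a small helper. A standalone sketch of that depth-counting loop, without the `MAX_LINK_TEXT_SENTINEL` cap:

```python
def find_matching_bracket(text, start_idx):
    """Return the index of the ']' matching text[start_idx] == '[',
    or None if no match is found."""
    depth = 0
    for p in range(start_idx + 1, len(text)):
        ch = text[p]
        if ch == ']':
            depth -= 1
            if depth < 0:       # closed more than we opened: the match
                return p
        elif ch == '[':
            depth += 1
    return None
```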
+
+    def header_id_from_text(self, text, prefix):
+        """Generate a header id attribute value from the given header
+        HTML content.
+        
+        This is only called if the "header-ids" extra is enabled.
+        Subclasses may override this for different header ids.
+        """
+        header_id = _slugify(text)
+        if prefix:
+            header_id = prefix + '-' + header_id
+        if header_id in self._count_from_header_id:
+            self._count_from_header_id[header_id] += 1
+            header_id += '-%s' % self._count_from_header_id[header_id]
+        else:
+            self._count_from_header_id[header_id] = 1
+        return header_id
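The duplicate-id counter above guarantees unique `id` attributes when two headers slugify to the same string. A sketch of that logic with a stand-in slugifier (markdown2's real `_slugify` handles punctuation and non-ASCII differently):

```python
import re

_count_from_header_id = {}

def slugify(text):
    # Hypothetical simplified slugifier for illustration.
    return re.sub(r'\W+', '-', text).strip('-').lower()

def header_id(text, prefix=None):
    hid = slugify(text)
    if prefix:
        hid = prefix + '-' + hid
    # Disambiguate repeats: second occurrence gets a '-2' suffix, etc.
    if hid in _count_from_header_id:
        _count_from_header_id[hid] += 1
        hid += '-%s' % _count_from_header_id[hid]
    else:
        _count_from_header_id[hid] = 1
    return hid
```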
+
+    _toc = None
+    def _toc_add_entry(self, level, id, name):
+        if self._toc is None:
+            self._toc = []
+        self._toc.append((level, id, name))
+
+    _setext_h_re = re.compile(r'^(.+)[ \t]*\n(=+|-+)[ \t]*\n+', re.M)
+    def _setext_h_sub(self, match):
+        n = {"=": 1, "-": 2}[match.group(2)[0]]
+        demote_headers = self.extras.get("demote-headers")
+        if demote_headers:
+            n = min(n + demote_headers, 6)
+        header_id_attr = ""
+        if "header-ids" in self.extras:
+            header_id = self.header_id_from_text(match.group(1),
+                prefix=self.extras["header-ids"])
+            header_id_attr = ' id="%s"' % header_id
+        html = self._run_span_gamut(match.group(1))
+        if "toc" in self.extras:
+            self._toc_add_entry(n, header_id, html)
+        return "<h%d%s>%s</h%d>\n\n" % (n, header_id_attr, html, n)
+
+    _atx_h_re = re.compile(r'''
+        ^(\#{1,6})  # \1 = string of #'s
+        [ \t]*
+        (.+?)       # \2 = Header text
+        [ \t]*
+        (?<!\\)     # ensure not an escaped trailing '#'
+        \#*         # optional closing #'s (not counted)
+        \n+
+        ''', re.X | re.M)
+    def _atx_h_sub(self, match):
+        n = len(match.group(1))
+        demote_headers = self.extras.get("demote-headers")
+        if demote_headers:
+            n = min(n + demote_headers, 6)
+        header_id_attr = ""
+        if "header-ids" in self.extras:
+            header_id = self.header_id_from_text(match.group(2),
+                prefix=self.extras["header-ids"])
+            header_id_attr = ' id="%s"' % header_id
+        html = self._run_span_gamut(match.group(2))
+        if "toc" in self.extras:
+            self._toc_add_entry(n, header_id, html)
+        return "<h%d%s>%s</h%d>\n\n" % (n, header_id_attr, html, n)
+
+    def _do_headers(self, text):
+        # Setext-style headers:
+        #     Header 1
+        #     ========
+        #  
+        #     Header 2
+        #     --------
+        text = self._setext_h_re.sub(self._setext_h_sub, text)
+
+        # atx-style headers:
+        #   # Header 1
+        #   ## Header 2
+        #   ## Header 2 with closing hashes ##
+        #   ...
+        #   ###### Header 6
+        text = self._atx_h_re.sub(self._atx_h_sub, text)
+
+        return text
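A quick check of the setext pattern used above: the header text is captured in group 1 and the underline (whose first character selects `<h1>` vs `<h2>`) in group 2.

```python
import re

setext_h_re = re.compile(r'^(.+)[ \t]*\n(=+|-+)[ \t]*\n+', re.M)

m = setext_h_re.search("Header 1\n========\n\nbody\n")
```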
+
+
+    _marker_ul_chars = '*+-'
+    _marker_any = r'(?:[%s]|\d+\.)' % _marker_ul_chars
+    _marker_ul = '(?:[%s])' % _marker_ul_chars
+    _marker_ol = r'(?:\d+\.)'
+
+    def _list_sub(self, match):
+        lst = match.group(1)
+        lst_type = match.group(3) in self._marker_ul_chars and "ul" or "ol"
+        result = self._process_list_items(lst)
+        if self.list_level:
+            return "<%s>\n%s</%s>\n" % (lst_type, result, lst_type)
+        else:
+            return "<%s>\n%s</%s>\n\n" % (lst_type, result, lst_type)
+
+    def _do_lists(self, text):
+        # Form HTML ordered (numbered) and unordered (bulleted) lists.
+
+        for marker_pat in (self._marker_ul, self._marker_ol):
+            # Re-usable pattern to match any entire ul or ol list:
+            less_than_tab = self.tab_width - 1
+            whole_list = r'''
+                (                   # \1 = whole list
+                  (                 # \2
+                    [ ]{0,%d}
+                    (%s)            # \3 = first list item marker
+                    [ \t]+
+                  )
+                  (?:.+?)
+                  (                 # \4
+                      \Z
+                    |
+                      \n{2,}
+                      (?=\S)
+                      (?!           # Negative lookahead for another list item marker
+                        [ \t]*
+                        %s[ \t]+
+                      )
+                  )
+                )
+            ''' % (less_than_tab, marker_pat, marker_pat)
+        
+            # We use a different prefix before nested lists than top-level lists.
+            # See extended comment in _process_list_items().
+            #
+            # Note: There's a bit of duplication here. My original implementation
+            # created a scalar regex pattern as the conditional result of the test on
+            # $g_list_level, and then only ran the $text =~ s{...}{...}egmx
+            # substitution once, using the scalar as the pattern. This worked,
+            # everywhere except when running under MT on my hosting account at Pair
+            # Networks. There, this caused all rebuilds to be killed by the reaper (or
+            # perhaps they crashed, but that seems incredibly unlikely given that the
+            # same script on the same server ran fine *except* under MT). I've spent
+            # more time trying to figure out why this is happening than I'd like to
+            # admit. My only guess, backed up by the fact that this workaround works,
+            # is that Perl optimizes the substitution when it can figure out that the
+            # pattern will never change, and when this optimization isn't on, we run
+            # afoul of the reaper. Thus, the slightly redundant code that uses two
+            # static s/// patterns rather than one conditional pattern.
+
+            if self.list_level:
+                sub_list_re = re.compile("^"+whole_list, re.X | re.M | re.S)
+                text = sub_list_re.sub(self._list_sub, text)
+            else:
+                list_re = re.compile(r"(?:(?<=\n\n)|\A\n?)"+whole_list,
+                                     re.X | re.M | re.S)
+                text = list_re.sub(self._list_sub, text)
+
+        return text
+    
+    _list_item_re = re.compile(r'''
+        (\n)?                   # leading line = \1
+        (^[ \t]*)               # leading whitespace = \2
+        (?P<marker>%s) [ \t]+   # list marker = \3
+        ((?:.+?)                # list item text = \4
+         (\n{1,2}))             # eols = \5
+        (?= \n* (\Z | \2 (?P<next_marker>%s) [ \t]+))
+        ''' % (_marker_any, _marker_any),
+        re.M | re.X | re.S)
+
+    _last_li_endswith_two_eols = False
+    def _list_item_sub(self, match):
+        item = match.group(4)
+        leading_line = match.group(1)
+        leading_space = match.group(2)
+        if leading_line or "\n\n" in item or self._last_li_endswith_two_eols:
+            item = self._run_block_gamut(self._outdent(item))
+        else:
+            # Recursion for sub-lists:
+            item = self._do_lists(self._outdent(item))
+            if item.endswith('\n'):
+                item = item[:-1]
+            item = self._run_span_gamut(item)
+        self._last_li_endswith_two_eols = (len(match.group(5)) == 2)
+        return "<li>%s</li>\n" % item
+
+    def _process_list_items(self, list_str):
+        # Process the contents of a single ordered or unordered list,
+        # splitting it into individual list items.
+    
+        # The $g_list_level global keeps track of when we're inside a list.
+        # Each time we enter a list, we increment it; when we leave a list,
+        # we decrement. If it's zero, we're not in a list anymore.
+        #
+        # We do this because when we're not inside a list, we want to treat
+        # something like this:
+        #
+        #       I recommend upgrading to version
+        #       8. Oops, now this line is treated
+        #       as a sub-list.
+        #
+        # As a single paragraph, despite the fact that the second line starts
+        # with a digit-period-space sequence.
+        #
+        # Whereas when we're inside a list (or sub-list), that line will be
+        # treated as the start of a sub-list. What a kludge, huh? This is
+        # an aspect of Markdown's syntax that's hard to parse perfectly
+        # without resorting to mind-reading. Perhaps the solution is to
+        # change the syntax rules such that sub-lists must start with a
+        # starting cardinal number; e.g. "1." or "a.".
+        self.list_level += 1
+        self._last_li_endswith_two_eols = False
+        list_str = list_str.rstrip('\n') + '\n'
+        list_str = self._list_item_re.sub(self._list_item_sub, list_str)
+        self.list_level -= 1
+        return list_str
+
+    def _get_pygments_lexer(self, lexer_name):
+        try:
+            from pygments import lexers, util
+        except ImportError:
+            return None
+        try:
+            return lexers.get_lexer_by_name(lexer_name)
+        except util.ClassNotFound:
+            return None
+
+    def _color_with_pygments(self, codeblock, lexer, **formatter_opts):
+        import pygments
+        import pygments.formatters
+
+        class HtmlCodeFormatter(pygments.formatters.HtmlFormatter):
+            def _wrap_code(self, inner):
+                """A function for use in a Pygments Formatter which
+                wraps in <code> tags.
+                """
+                yield 0, "<code>"
+                for tup in inner:
+                    yield tup 
+                yield 0, "</code>"
+
+            def wrap(self, source, outfile):
+                """Return the source with a code, pre, and div."""
+                return self._wrap_div(self._wrap_pre(self._wrap_code(source)))
+
+        formatter = HtmlCodeFormatter(cssclass="codehilite", **formatter_opts)
+        return pygments.highlight(codeblock, lexer, formatter)
+
+    def _code_block_sub(self, match):
+        codeblock = match.group(1)
+        codeblock = self._outdent(codeblock)
+        codeblock = self._detab(codeblock)
+        codeblock = codeblock.lstrip('\n')  # trim leading newlines
+        codeblock = codeblock.rstrip()      # trim trailing whitespace
+
+        if "code-color" in self.extras and codeblock.startswith(":::"):
+            lexer_name, rest = codeblock.split('\n', 1)
+            lexer_name = lexer_name[3:].strip()
+            lexer = self._get_pygments_lexer(lexer_name)
+            codeblock = rest.lstrip("\n")   # Remove lexer declaration line.
+            if lexer:
+                formatter_opts = self.extras['code-color'] or {}
+                colored = self._color_with_pygments(codeblock, lexer,
+                                                    **formatter_opts)
+                return "\n\n%s\n\n" % colored
+
+        codeblock = self._encode_code(codeblock)
+        pre_class_str = self._html_class_str_from_tag("pre")
+        code_class_str = self._html_class_str_from_tag("code")
+        return "\n\n<pre%s><code%s>%s\n</code></pre>\n\n" % (
+            pre_class_str, code_class_str, codeblock)
+
+    def _html_class_str_from_tag(self, tag):
+        """Get the appropriate ' class="..."' string (note the leading
+        space), if any, for the given tag.
+        """
+        if "html-classes" not in self.extras:
+            return ""
+        try:
+            html_classes_from_tag = self.extras["html-classes"]
+        except TypeError:
+            return ""
+        else:
+            if tag in html_classes_from_tag:
+                return ' class="%s"' % html_classes_from_tag[tag]
+        return ""
+
+    def _do_code_blocks(self, text):
+        """Process Markdown `<pre><code>` blocks."""
+        code_block_re = re.compile(r'''
+            (?:\n\n|\A)
+            (               # $1 = the code block -- one or more lines, starting with a space/tab
+              (?:
+                (?:[ ]{%d} | \t)  # Lines must start with a tab or a tab-width of spaces
+                .*\n+
+              )+
+            )
+            ((?=^[ ]{0,%d}\S)|\Z)   # Lookahead for non-space at line-start, or end of doc
+            ''' % (self.tab_width, self.tab_width),
+            re.M | re.X)
+
+        return code_block_re.sub(self._code_block_sub, text)
+
+
+    # Rules for a code span:
+    # - backslash escapes are not interpreted in a code span
+    # - to include one backtick or a run of backticks, the delimiters must
+    #   be a longer run of backticks
+    # - cannot start or end a code span with a backtick; pad with a
+    #   space and that space will be removed in the emitted HTML
+    # See `test/tm-cases/escapes.text` for a number of edge-case
+    # examples.
+    _code_span_re = re.compile(r'''
+            (?<!\\)
+            (`+)        # \1 = Opening run of `
+            (?!`)       # See "Note A" in test/tm-cases/escapes.text
+            (.+?)       # \2 = The code block
+            (?<!`)
+            \1          # Matching closer
+            (?!`)
+        ''', re.X | re.S)
+
+    def _code_span_sub(self, match):
+        c = match.group(2).strip(" \t")
+        c = self._encode_code(c)
+        return "<code>%s</code>" % c
+
+    def _do_code_spans(self, text):
+        #   *   Backtick quotes are used for <code></code> spans.
+        # 
+        #   *   You can use multiple backticks as the delimiters if you want to
+        #       include literal backticks in the code span. So, this input:
+        #     
+        #         Just type ``foo `bar` baz`` at the prompt.
+        #     
+        #       Will translate to:
+        #     
+        #         <p>Just type <code>foo `bar` baz</code> at the prompt.</p>
+        #     
+        #       There's no arbitrary limit to the number of backticks you
+        #       can use as delimiters. If you need three consecutive backticks
+        #       in your code, use four for delimiters, etc.
+        #
+        #   *   You can use spaces to get literal backticks at the edges:
+        #     
+        #         ... type `` `bar` `` ...
+        #     
+        #       Turns to:
+        #     
+        #         ... type <code>`bar`</code> ...
+        return self._code_span_re.sub(self._code_span_sub, text)
+
+    def _encode_code(self, text):
+        """Encode/escape certain characters inside Markdown code runs.
+        The point is that in code, these characters are literals,
+        and lose their special Markdown meanings.
+        """
+        replacements = [
+            # Encode all ampersands; HTML entities are not
+            # entities within a Markdown code span.
+            ('&', '&amp;'),
+            # Do the angle bracket song and dance:
+            ('<', '&lt;'),
+            ('>', '&gt;'),
+            # Now, escape characters that are magic in Markdown:
+            ('*', g_escape_table['*']),
+            ('_', g_escape_table['_']),
+            ('{', g_escape_table['{']),
+            ('}', g_escape_table['}']),
+            ('[', g_escape_table['[']),
+            (']', g_escape_table[']']),
+            ('\\', g_escape_table['\\']),
+        ]
+        for before, after in replacements:
+            text = text.replace(before, after)
+        return text
+
+    _strong_re = re.compile(r"(\*\*|__)(?=\S)(.+?[*_]*)(?<=\S)\1", re.S)
+    _em_re = re.compile(r"(\*|_)(?=\S)(.+?)(?<=\S)\1", re.S)
+    _code_friendly_strong_re = re.compile(r"\*\*(?=\S)(.+?[*_]*)(?<=\S)\*\*", re.S)
+    _code_friendly_em_re = re.compile(r"\*(?=\S)(.+?)(?<=\S)\*", re.S)
+    def _do_italics_and_bold(self, text):
+        # <strong> must go first:
+        if "code-friendly" in self.extras:
+            text = self._code_friendly_strong_re.sub(r"<strong>\1</strong>", text)
+            text = self._code_friendly_em_re.sub(r"<em>\1</em>", text)
+        else:
+            text = self._strong_re.sub(r"<strong>\2</strong>", text)
+            text = self._em_re.sub(r"<em>\2</em>", text)
+        return text
+    
+
+    _block_quote_re = re.compile(r'''
+        (                           # Wrap whole match in \1
+          (
+            ^[ \t]*>[ \t]?          # '>' at the start of a line
+              .+\n                  # rest of the first line
+            (.+\n)*                 # subsequent consecutive lines
+            \n*                     # blanks
+          )+
+        )
+        ''', re.M | re.X)
+    _bq_one_level_re = re.compile('^[ \t]*>[ \t]?', re.M)
+
+    _html_pre_block_re = re.compile(r'(\s*<pre>.+?</pre>)', re.S)
+    def _dedent_two_spaces_sub(self, match):
+        return re.sub(r'(?m)^  ', '', match.group(1))
+
+    def _block_quote_sub(self, match):
+        bq = match.group(1)
+        bq = self._bq_one_level_re.sub('', bq)  # trim one level of quoting
+        bq = self._ws_only_line_re.sub('', bq)  # trim whitespace-only lines
+        bq = self._run_block_gamut(bq)          # recurse
+
+        bq = re.sub('(?m)^', '  ', bq)
+        # These leading spaces screw with <pre> content, so we need to fix that:
+        bq = self._html_pre_block_re.sub(self._dedent_two_spaces_sub, bq)
+
+        return "<blockquote>\n%s\n</blockquote>\n\n" % bq
+
+    def _do_block_quotes(self, text):
+        if '>' not in text:
+            return text
+        return self._block_quote_re.sub(self._block_quote_sub, text)
+
+    def _form_paragraphs(self, text):
+        # Strip leading and trailing lines:
+        text = text.strip('\n')
+
+        # Wrap <p> tags.
+        grafs = []
+        for i, graf in enumerate(re.split(r"\n{2,}", text)):
+            if graf in self.html_blocks:
+                # Unhashify HTML blocks
+                grafs.append(self.html_blocks[graf])
+            else:
+                cuddled_list = None
+                if "cuddled-lists" in self.extras:
+                    # Need to put back trailing '\n' for `_list_item_re`
+                    # match at the end of the paragraph.
+                    li = self._list_item_re.search(graf + '\n')
+                    # Two of the same list marker in this paragraph: a likely
+                    # candidate for a list cuddled to preceding paragraph
+                    # text (issue 33). Note the `[-1]` is a quick way to
+                    # consider numeric bullets (e.g. "1." and "2.") to be
+                    # equal.
+                    if (li and len(li.group(2)) <= 3 and li.group("next_marker")
+                        and li.group("marker")[-1] == li.group("next_marker")[-1]):
+                        start = li.start()
+                        cuddled_list = self._do_lists(graf[start:]).rstrip("\n")
+                        assert cuddled_list.startswith("<ul>") or cuddled_list.startswith("<ol>")
+                        graf = graf[:start]
+                    
+                # Wrap <p> tags.
+                graf = self._run_span_gamut(graf)
+                grafs.append("<p>" + graf.lstrip(" \t") + "</p>")
+                
+                if cuddled_list:
+                    grafs.append(cuddled_list)
+
+        return "\n\n".join(grafs)
+
+    def _add_footnotes(self, text):
+        if self.footnotes:
+            footer = [
+                '<div class="footnotes">',
+                '<hr' + self.empty_element_suffix,
+                '<ol>',
+            ]
+            for i, id in enumerate(self.footnote_ids):
+                if i != 0:
+                    footer.append('')
+                footer.append('<li id="fn-%s">' % id)
+                footer.append(self._run_block_gamut(self.footnotes[id]))
+                backlink = ('<a href="#fnref-%s" '
+                    'class="footnoteBackLink" '
+                    'title="Jump back to footnote %d in the text.">'
+                    '&#8617;</a>' % (id, i+1))
+                if footer[-1].endswith("</p>"):
+                    footer[-1] = footer[-1][:-len("</p>")] \
+                        + '&nbsp;' + backlink + "</p>"
+                else:
+                    footer.append("\n<p>%s</p>" % backlink)
+                footer.append('</li>')
+            footer.append('</ol>')
+            footer.append('</div>')
+            return text + '\n\n' + '\n'.join(footer)
+        else:
+            return text
+
+    # Ampersand-encoding based entirely on Nat Irons's Amputator MT plugin:
+    #   http://bumppo.net/projects/amputator/
+    _ampersand_re = re.compile(r'&(?!#?[xX]?(?:[0-9a-fA-F]+|\w+);)')
+    _naked_lt_re = re.compile(r'<(?![a-z/?\$!])', re.I)
+    _naked_gt_re = re.compile(r'''(?<![a-z?!/'"-])>''', re.I)
+
+    def _encode_amps_and_angles(self, text):
+        # Smart processing for ampersands and angle brackets that need
+        # to be encoded.
+        text = self._ampersand_re.sub('&amp;', text)
+    
+        # Encode naked <'s
+        text = self._naked_lt_re.sub('&lt;', text)
+
+        # Encode naked >'s
+        # Note: Other markdown implementations (e.g. Markdown.pl, PHP
+        # Markdown) don't do this.
+        text = self._naked_gt_re.sub('&gt;', text)
+        return text
+
+    def _encode_backslash_escapes(self, text):
+        for ch, escape in g_escape_table.items():
+            text = text.replace("\\"+ch, escape)
+        return text
+
+    _auto_link_re = re.compile(r'<((https?|ftp):[^\'">\s]+)>', re.I)
+    def _auto_link_sub(self, match):
+        g1 = match.group(1)
+        return '<a href="%s">%s</a>' % (g1, g1)
+
+    _auto_email_link_re = re.compile(r"""
+          <
+           (?:mailto:)?
+          (
+              [-.\w]+
+              \@
+              [-\w]+(\.[-\w]+)*\.[a-z]+
+          )
+          >
+        """, re.I | re.X | re.U)
+    def _auto_email_link_sub(self, match):
+        return self._encode_email_address(
+            self._unescape_special_chars(match.group(1)))
+
+    def _do_auto_links(self, text):
+        text = self._auto_link_re.sub(self._auto_link_sub, text)
+        text = self._auto_email_link_re.sub(self._auto_email_link_sub, text)
+        return text
+
+    def _encode_email_address(self, addr):
+        #  Input: an email address, e.g. "foo@example.com"
+        #
+        #  Output: the email address as a mailto link, with each character
+        #      of the address encoded as either a decimal or hex entity, in
+        #      the hopes of foiling most address harvesting spam bots. E.g.:
+        #
+        #    <a href="&#x6D;&#97;&#105;&#108;&#x74;&#111;:&#102;&#111;&#111;&#64;&#101;
+        #       x&#x61;&#109;&#x70;&#108;&#x65;&#x2E;&#99;&#111;&#109;">&#102;&#111;&#111;
+        #       &#64;&#101;x&#x61;&#109;&#x70;&#108;&#x65;&#x2E;&#99;&#111;&#109;</a>
+        #
+        #  Based on a filter by Matthew Wickline, posted to the BBEdit-Talk
+        #  mailing list: <http://tinyurl.com/yu7ue>
+        chars = [_xml_encode_email_char_at_random(ch)
+                 for ch in "mailto:" + addr]
+        # Strip the mailto: from the visible part.
+        addr = '<a href="%s">%s</a>' \
+               % (''.join(chars), ''.join(chars[7:]))
+        return addr
+    
+    def _do_link_patterns(self, text):
+        """Caveat emptor: there isn't much guarding against link
+        patterns being formed inside other standard Markdown links, e.g.
+        inside a [link def][like this].
+
+        Dev Notes: *Could* consider prefixing regexes with a negative
+        lookbehind assertion to attempt to guard against this.
+        """
+        link_from_hash = {}
+        for regex, repl in self.link_patterns:
+            replacements = []
+            for match in regex.finditer(text):
+                if hasattr(repl, "__call__"):
+                    href = repl(match)
+                else:
+                    href = match.expand(repl)
+                replacements.append((match.span(), href))
+            for (start, end), href in reversed(replacements):
+                escaped_href = (
+                    href.replace('"', '&quot;')  # b/c of attr quote
+                        # To avoid markdown <em> and <strong>:
+                        .replace('*', g_escape_table['*'])
+                        .replace('_', g_escape_table['_']))
+                link = '<a href="%s">%s</a>' % (escaped_href, text[start:end])
+                hash = _hash_text(link)
+                link_from_hash[hash] = link
+                text = text[:start] + hash + text[end:]
+        for hash, link in link_from_hash.items():
+            text = text.replace(hash, link)
+        return text
+    
+    def _unescape_special_chars(self, text):
+        # Swap back in all the special characters we've hidden.
+        for ch, hash in g_escape_table.items():
+            text = text.replace(hash, ch)
+        return text
+
+    def _outdent(self, text):
+        # Remove one level of line-leading tabs or spaces
+        return self._outdent_re.sub('', text)
+
+
+class MarkdownWithExtras(Markdown):
+    """A markdowner class that enables most extras:
+
+    - footnotes
+    - code-color (only has effect if 'pygments' Python module on path)
+
+    These are not included:
+    - pyshell (specific to Python-related documenting)
+    - code-friendly (because it *disables* part of the syntax)
+    - link-patterns (because you need to specify some actual
+      link-patterns anyway)
+    """
+    extras = ["footnotes", "code-color"]
+
+
+#---- internal support functions
+
+class UnicodeWithAttrs(unicode):
+    """A subclass of unicode used for the return value of conversion to
+    possibly attach some attributes. E.g. the "toc_html" attribute when
+    the "toc" extra is used.
+    """
+    _toc = None
+    @property
+    def toc_html(self):
+        """Return the HTML for the current TOC.
+        
+        This expects the `_toc` attribute to have been set on this instance.
+        """
+        if self._toc is None:
+            return None
+        
+        def indent():
+            return '  ' * (len(h_stack) - 1)
+        lines = []
+        h_stack = [0]   # stack of header-level numbers
+        for level, id, name in self._toc:
+            if level > h_stack[-1]:
+                lines.append("%s<ul>" % indent())
+                h_stack.append(level)
+            elif level == h_stack[-1]:
+                lines[-1] += "</li>"
+            else:
+                while level < h_stack[-1]:
+                    h_stack.pop()
+                    if not lines[-1].endswith("</li>"):
+                        lines[-1] += "</li>"
+                    lines.append("%s</ul></li>" % indent())
+            lines.append(u'%s<li><a href="#%s">%s</a>' % (
+                indent(), id, name))
+        while len(h_stack) > 1:
+            h_stack.pop()
+            if not lines[-1].endswith("</li>"):
+                lines[-1] += "</li>"
+            lines.append("%s</ul>" % indent())
+        return '\n'.join(lines) + '\n'
+
+
+_slugify_strip_re = re.compile(r'[^\w\s-]')
+_slugify_hyphenate_re = re.compile(r'[-\s]+')
+def _slugify(value):
+    """
+    Normalizes string, converts to lowercase, removes non-alpha characters,
+    and converts spaces to hyphens.
+    
+    From Django's "django/template/defaultfilters.py".
+    """
+    import unicodedata
+    value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore')
+    value = unicode(_slugify_strip_re.sub('', value).strip().lower())
+    return _slugify_hyphenate_re.sub('-', value)
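+# Illustrative example (added note, not part of the upstream source): with
+# the "header-ids" extra, _slugify is what turns header text into id values:
+#
+#   >>> _slugify(u"Hello, World!")
+#   u'hello-world'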
+
+# From http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52549
+def _curry(*args, **kwargs):
+    function, args = args[0], args[1:]
+    def result(*rest, **kwrest):
+        combined = kwargs.copy()
+        combined.update(kwrest)
+        return function(*args + rest, **combined)
+    return result
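+# Illustrative example (added note, not part of the upstream source): _curry
+# pre-binds leading positional (and keyword) arguments of a function:
+#
+#   >>> add = lambda a, b: a + b
+#   >>> _curry(add, 2)(3)
+#   5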
+
+# Recipe: regex_from_encoded_pattern (1.0)
+def _regex_from_encoded_pattern(s):
+    """'foo'    -> re.compile(re.escape('foo'))
+       '/foo/'  -> re.compile('foo')
+       '/foo/i' -> re.compile('foo', re.I)
+    """
+    if s.startswith('/') and s.rfind('/') != 0:
+        # Parse it: /PATTERN/FLAGS
+        idx = s.rfind('/')
+        pattern, flags_str = s[1:idx], s[idx+1:]
+        flag_from_char = {
+            "i": re.IGNORECASE,
+            "l": re.LOCALE,
+            "s": re.DOTALL,
+            "m": re.MULTILINE,
+            "u": re.UNICODE,
+        }
+        flags = 0
+        for char in flags_str:
+            try:
+                flags |= flag_from_char[char]
+            except KeyError:
+                raise ValueError("unsupported regex flag: '%s' in '%s' "
+                                 "(must be one of '%s')"
+                                 % (char, s, ''.join(flag_from_char.keys())))
+        return re.compile(s[1:idx], flags)
+    else: # not an encoded regex
+        return re.compile(re.escape(s))
+
+# Recipe: dedent (0.1.2)
+def _dedentlines(lines, tabsize=8, skip_first_line=False):
+    """_dedentlines(lines, tabsize=8, skip_first_line=False) -> dedented lines
+    
+        "lines" is a list of lines to dedent.
+        "tabsize" is the tab width to use for indent width calculations.
+        "skip_first_line" is a boolean indicating if the first line should
+            be skipped for calculating the indent width and for dedenting.
+            This is sometimes useful for docstrings and similar.
+    
+    Same as dedent() except operates on a sequence of lines. Note: the
+    lines list is modified **in-place**.
+    """
+    DEBUG = False
+    if DEBUG: 
+        print "dedent: dedent(..., tabsize=%d, skip_first_line=%r)"\
+              % (tabsize, skip_first_line)
+    indents = []
+    margin = None
+    for i, line in enumerate(lines):
+        if i == 0 and skip_first_line: continue
+        indent = 0
+        for ch in line:
+            if ch == ' ':
+                indent += 1
+            elif ch == '\t':
+                indent += tabsize - (indent % tabsize)
+            elif ch in '\r\n':
+                continue # skip all-whitespace lines
+            else:
+                break
+        else:
+            continue # skip all-whitespace lines
+        if DEBUG: print "dedent: indent=%d: %r" % (indent, line)
+        if margin is None:
+            margin = indent
+        else:
+            margin = min(margin, indent)
+    if DEBUG: print "dedent: margin=%r" % margin
+
+    if margin is not None and margin > 0:
+        for i, line in enumerate(lines):
+            if i == 0 and skip_first_line: continue
+            removed = 0
+            for j, ch in enumerate(line):
+                if ch == ' ':
+                    removed += 1
+                elif ch == '\t':
+                    removed += tabsize - (removed % tabsize)
+                elif ch in '\r\n':
+                    if DEBUG: print "dedent: %r: EOL -> strip up to EOL" % line
+                    lines[i] = lines[i][j:]
+                    break
+                else:
+                    raise ValueError("unexpected non-whitespace char %r in "
+                                     "line %r while removing %d-space margin"
+                                     % (ch, line, margin))
+                if DEBUG:
+                    print "dedent: %r: %r -> removed %d/%d"\
+                          % (line, ch, removed, margin)
+                if removed == margin:
+                    lines[i] = lines[i][j+1:]
+                    break
+                elif removed > margin:
+                    lines[i] = ' '*(removed-margin) + lines[i][j+1:]
+                    break
+            else:
+                if removed:
+                    lines[i] = lines[i][removed:]
+    return lines
+
+def _dedent(text, tabsize=8, skip_first_line=False):
+    """_dedent(text, tabsize=8, skip_first_line=False) -> dedented text
+
+        "text" is the text to dedent.
+        "tabsize" is the tab width to use for indent width calculations.
+        "skip_first_line" is a boolean indicating if the first line should
+            be skipped for calculating the indent width and for dedenting.
+            This is sometimes useful for docstrings and similar.
+    
+    textwrap.dedent(s), but don't expand tabs to spaces
+    """
+    lines = text.splitlines(1)
+    _dedentlines(lines, tabsize=tabsize, skip_first_line=skip_first_line)
+    return ''.join(lines)
+
+
+class _memoized(object):
+   """Decorator that caches a function's return value each time it is called.
+   If called later with the same arguments, the cached value is returned, and
+   not re-evaluated.
+
+   http://wiki.python.org/moin/PythonDecoratorLibrary
+   """
+   def __init__(self, func):
+      self.func = func
+      self.cache = {}
+   def __call__(self, *args):
+      try:
+         return self.cache[args]
+      except KeyError:
+         self.cache[args] = value = self.func(*args)
+         return value
+      except TypeError:
+         # uncachable -- for instance, passing a list as an argument.
+         # Better to not cache than to blow up entirely.
+         return self.func(*args)
+   def __repr__(self):
+      """Return the function's docstring."""
+      return self.func.__doc__
+
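+# Illustrative example (added note, not part of the upstream source):
+# _memoized caches by the argument tuple when it is hashable, and falls
+# back to a plain uncached call when it is not (e.g. a list argument):
+#
+#   >>> @_memoized
+#   ... def square(n):
+#   ...     return n * n
+#   >>> square(4)       # computed and cached
+#   16
+#   >>> square(4)       # served from the cache
+#   16
+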
+
+def _xml_oneliner_re_from_tab_width(tab_width):
+    """Standalone XML processing instruction regex."""
+    return re.compile(r"""
+        (?:
+            (?<=\n\n)       # Starting after a blank line
+            |               # or
+            \A\n?           # the beginning of the doc
+        )
+        (                           # save in $1
+            [ ]{0,%d}
+            (?:
+                <\?\w+\b\s+.*?\?>   # XML processing instruction
+                |
+                <\w+:\w+\b\s+.*?/>  # namespaced single tag
+            )
+            [ \t]*
+            (?=\n{2,}|\Z)       # followed by a blank line or end of document
+        )
+        """ % (tab_width - 1), re.X)
+_xml_oneliner_re_from_tab_width = _memoized(_xml_oneliner_re_from_tab_width)
+
+def _hr_tag_re_from_tab_width(tab_width):
+    return re.compile(r"""
+        (?:
+            (?<=\n\n)       # Starting after a blank line
+            |               # or
+            \A\n?           # the beginning of the doc
+        )
+        (                       # save in \1
+            [ ]{0,%d}
+            <(hr)               # start tag = \2
+            \b                  # word break
+            ([^<>])*?           # 
+            /?>                 # the matching end tag
+            [ \t]*
+            (?=\n{2,}|\Z)       # followed by a blank line or end of document
+        )
+        """ % (tab_width - 1), re.X)
+_hr_tag_re_from_tab_width = _memoized(_hr_tag_re_from_tab_width)
+
+
+def _xml_encode_email_char_at_random(ch):
+    r = random()
+    # Roughly 10% raw, 45% hex, 45% dec.
+    # '@' *must* be encoded. I [John Gruber] insist.
+    # Issue 26: '_' must be encoded.
+    if r > 0.9 and ch not in "@_":
+        return ch
+    elif r < 0.45:
+        # The [1:] is to drop leading '0': 0x63 -> x63
+        return '&#%s;' % hex(ord(ch))[1:]
+    else:
+        return '&#%s;' % ord(ch)
+
+
+
+#---- mainline
+
+class _NoReflowFormatter(optparse.IndentedHelpFormatter):
+    """An optparse formatter that does NOT reflow the description."""
+    def format_description(self, description):
+        return description or ""
+
+def _test():
+    import doctest
+    doctest.testmod()
+
+def main(argv=None):
+    if argv is None:
+        argv = sys.argv
+    if not logging.root.handlers:
+        logging.basicConfig()
+
+    usage = "usage: %prog [PATHS...]"
+    version = "%prog "+__version__
+    parser = optparse.OptionParser(prog="markdown2", usage=usage,
+        version=version, description=cmdln_desc,
+        formatter=_NoReflowFormatter())
+    parser.add_option("-v", "--verbose", dest="log_level",
+                      action="store_const", const=logging.DEBUG,
+                      help="more verbose output")
+    parser.add_option("--encoding",
+                      help="specify encoding of text content")
+    parser.add_option("--html4tags", action="store_true", default=False, 
+                      help="use HTML 4 style for empty element tags")
+    parser.add_option("-s", "--safe", metavar="MODE", dest="safe_mode",
+                      help="sanitize literal HTML: 'escape' escapes "
+                           "HTML meta chars, 'replace' replaces with an "
+                           "[HTML_REMOVED] note")
+    parser.add_option("-x", "--extras", action="append",
+                      help="Turn on specific extra features (not part of "
+                           "the core Markdown spec). See above.")
+    parser.add_option("--use-file-vars",
+                      help="Look for and use Emacs-style 'markdown-extras' "
+                           "file var to turn on extras. See "
+                           "<http://code.google.com/p/python-markdown2/wiki/Extras>.")
+    parser.add_option("--link-patterns-file",
+                      help="path to a link pattern file")
+    parser.add_option("--self-test", action="store_true",
+                      help="run internal self-tests (some doctests)")
+    parser.add_option("--compare", action="store_true",
+                      help="run against Markdown.pl as well (for testing)")
+    parser.set_defaults(log_level=logging.INFO, compare=False,
+                        encoding="utf-8", safe_mode=None, use_file_vars=False)
+    opts, paths = parser.parse_args()
+    log.setLevel(opts.log_level)
+
+    if opts.self_test:
+        return _test()
+
+    if opts.extras:
+        extras = {}
+        for s in opts.extras:
+            splitter = re.compile("[,;: ]+")
+            for e in splitter.split(s):
+                if '=' in e:
+                    ename, earg = e.split('=', 1)
+                    try:
+                        earg = int(earg)
+                    except ValueError:
+                        pass
+                else:
+                    ename, earg = e, None
+                extras[ename] = earg
+    else:
+        extras = None
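The `-x/--extras` handling above accepts repeated options, splits each value on commas, semicolons, colons, or spaces, and turns `name=arg` into a dict entry with integer coercion. The same parse as a standalone helper (hypothetical name):

```python
import re

def parse_extras(values):
    """Parse repeated -x values such as 'header-ids=prefix,toc' into a dict.
    Bare names map to None; integer-looking args become ints."""
    extras = {}
    splitter = re.compile(r"[,;: ]+")
    for s in values:
        for e in splitter.split(s):
            if '=' in e:
                ename, earg = e.split('=', 1)
                try:
                    earg = int(earg)  # e.g. an extra that takes a numeric arg
                except ValueError:
                    pass  # keep the string form
            else:
                ename, earg = e, None
            extras[ename] = earg
    return extras
```

Mapping bare names to `None` (rather than omitting them) is what lets later code distinguish "extra enabled with no argument" from "extra not requested".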
+
+    if opts.link_patterns_file:
+        link_patterns = []
+        f = open(opts.link_patterns_file)
+        try:
+            for i, line in enumerate(f.readlines()):
+                if not line.strip(): continue
+                if line.lstrip().startswith("#"): continue
+                try:
+                    pat, href = line.rstrip().rsplit(None, 1)
+                except ValueError:
+                    raise MarkdownError("%s:%d: invalid link pattern line: %r"
+                                        % (opts.link_patterns_file, i+1, line))
+                link_patterns.append(
+                    (_regex_from_encoded_pattern(pat), href))
+        finally:
+            f.close()
+    else:
+        link_patterns = None
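The link-patterns file read above is one pattern per line — a regex, then an href, separated by the last run of whitespace — with blank lines and `#` comments skipped. A sketch of the line format using plain `re.compile` (the real code's `_regex_from_encoded_pattern` additionally understands `/pattern/flags` syntax):

```python
import re

def parse_link_patterns(text):
    """Parse 'REGEX HREF' lines into (compiled regex, href) pairs,
    skipping blank lines and '#' comments."""
    patterns = []
    for i, line in enumerate(text.splitlines()):
        if not line.strip() or line.lstrip().startswith("#"):
            continue
        try:
            # rsplit on the last whitespace run: the regex itself may contain spaces
            pat, href = line.rstrip().rsplit(None, 1)
        except ValueError:
            raise ValueError("line %d: invalid link pattern line: %r" % (i + 1, line))
        patterns.append((re.compile(pat), href))
    return patterns
```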
+
+    from os.path import join, dirname, abspath, exists
+    markdown_pl = join(dirname(dirname(abspath(__file__))), "test",
+                       "Markdown.pl")
+    for path in paths:
+        if opts.compare:
+            print "==== Markdown.pl ===="
+            perl_cmd = 'perl %s "%s"' % (markdown_pl, path)
+            o = os.popen(perl_cmd)
+            perl_html = o.read()
+            o.close()
+            sys.stdout.write(perl_html)
+            print "==== markdown2.py ===="
+        html = markdown_path(path, encoding=opts.encoding,
+                             html4tags=opts.html4tags,
+                             safe_mode=opts.safe_mode,
+                             extras=extras, link_patterns=link_patterns,
+                             use_file_vars=opts.use_file_vars)
+        sys.stdout.write(
+            html.encode(sys.stdout.encoding or "utf-8", 'xmlcharrefreplace'))
+        if extras and "toc" in extras:
+            log.debug("toc_html: " +
+                html.toc_html.encode(sys.stdout.encoding or "utf-8", 'xmlcharrefreplace'))
+        if opts.compare:
+            test_dir = join(dirname(dirname(abspath(__file__))), "test")
+            if exists(join(test_dir, "test_markdown2.py")):
+                sys.path.insert(0, test_dir)
+                from test_markdown2 import norm_html_from_html
+                norm_html = norm_html_from_html(html)
+                norm_perl_html = norm_html_from_html(perl_html)
+            else:
+                norm_html = html
+                norm_perl_html = perl_html
+            print "==== match? %r ====" % (norm_perl_html == norm_html)
+
+
+if __name__ == "__main__":
+    sys.exit( main(sys.argv) )
+
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/bundled/markdown2/setup.py	Thu Jul 01 19:32:49 2010 -0400
@@ -0,0 +1,66 @@
+#!/usr/bin/env python
+
+"""markdown2: A fast and complete Python implementation of Markdown.
+
+Markdown is a text-to-HTML filter; it translates an easy-to-read /
+easy-to-write structured text format into HTML.  Markdown's text
+format is most similar to that of plain text email, and supports
+features such as headers, *emphasis*, code blocks, blockquotes, and
+links.  -- http://daringfireball.net/projects/markdown/
+
+This is a fast and complete Python implementation of the Markdown
+spec.
+"""
+
+import os
+import sys
+import distutils
+from distutils.core import setup
+
+sys.path.insert(0, os.path.join(os.path.dirname(__file__), "lib"))
+try:
+    import markdown2
+finally:
+    del sys.path[0]
+
+classifiers = """\
+Development Status :: 5 - Production/Stable
+Intended Audience :: Developers
+License :: OSI Approved :: MIT License
+Programming Language :: Python
+Operating System :: OS Independent
+Topic :: Software Development :: Libraries :: Python Modules
+Topic :: Software Development :: Documentation
+Topic :: Text Processing :: Filters
+Topic :: Text Processing :: Markup :: HTML 
+"""
+
+if sys.version_info < (2, 3):
+    # Distutils before Python 2.3 doesn't accept classifiers.
+    _setup = setup
+    def setup(**kwargs):
+        if kwargs.has_key("classifiers"):
+            del kwargs["classifiers"]
+        _setup(**kwargs)
+
+doclines = __doc__.split("\n")
+script = (sys.platform == "win32" and "lib\\markdown2.py" or "bin/markdown2")
+
+setup(
+    name="markdown2",
+    version=markdown2.__version__,
+    maintainer="Trent Mick",
+    maintainer_email="trentm@gmail.com",
+    author="Trent Mick",
+    author_email="trentm@gmail.com",
+    url="http://code.google.com/p/python-markdown2/",
+    license="http://www.opensource.org/licenses/mit-license.php",
+    platforms=["any"],
+    py_modules=["markdown2"],
+    package_dir={"": "lib"},
+    scripts=[script],
+    description=doclines[0],
+    classifiers=filter(None, classifiers.split("\n")),
+    long_description="\n".join(doclines[2:]),
+)
+
--- a/contrib/deploy/wsgi.py	Tue Jun 15 20:30:23 2010 -0400
+++ b/contrib/deploy/wsgi.py	Thu Jul 01 19:32:49 2010 -0400
@@ -2,9 +2,9 @@
 # Edit as necessary.
 
 # If hg-review is not on your webserver's PYTHONPATH, uncomment the lines
-# below and point it at the hg-review/review directory.
+# below and point it at the hg-review directory.
 import sys
-sys.path.insert(0, "/path/to/hg-review/review")
+sys.path.insert(0, "/path/to/hg-review")
 
 REPO = '/path/to/your/repo'
 READ_ONLY = True
@@ -14,7 +14,7 @@
 TITLE = 'Your Project'
 
 from mercurial import hg, ui
-from web_ui import app
+from review.web import app
 
 _ui = ui.ui()
 _ui.setconfig('ui', 'user', ANON_USER)
--- a/review/__init__.py	Tue Jun 15 20:30:23 2010 -0400
+++ b/review/__init__.py	Thu Jul 01 19:32:49 2010 -0400
@@ -1,3 +1,3 @@
 """commands for code reviewing changesets"""
 
-from extension_ui import *
\ No newline at end of file
+from cli import *
--- a/review/api.py	Tue Jun 15 20:30:23 2010 -0400
+++ b/review/api.py	Thu Jul 01 19:32:49 2010 -0400
@@ -3,7 +3,7 @@
 """The API for interacting with code review data."""
 
 import datetime, operator, os
-import file_templates, messages
+import files, messages
 from mercurial import cmdutil, error, hg, patch, util
 from mercurial.node import hex
 from mercurial import ui as _ui
@@ -61,11 +61,7 @@
 
 
 class SignoffExists(Exception):
-    """Raised when trying to signoff twice without forcing."""
-    pass
-
-class CannotDeleteObject(Exception):
-    """Raised when trying to delete an object that does not support deletion."""
+    """Raised when trying to signoff twice."""
     pass
 
 class FileNotInChangeset(Exception):
@@ -75,6 +71,19 @@
         self.filename = filename
 
 
+class AmbiguousIdentifier(Exception):
+    """Raised when trying to specify an item with an identifier which matches more than one item."""
+    pass
+
+class UnknownIdentifier(Exception):
+    """Raised when trying to specify an item with an identifier which does not match any items."""
+    pass
+
+class WrongEditItemType(Exception):
+    """Raised when calling edit_comment with a signoff, or vice versa."""
+    pass
+
+
 def _split_path_dammit(p):
     """Take a file path (from the current platform) and split it.  Really.
 
@@ -158,6 +167,8 @@
     else:
         return bare_datetime - offset
 
+def _flatten_filter(i):
+    return filter(None, reduce(operator.add, i, []))
 
 def sanitize_path(p, repo=None):
     """Sanitize a (platform-specific) path.
@@ -232,7 +243,7 @@
             try:
                 hg.repository(ui, self.lpath)
             except error.RepoError:
-                hg.clone(cmdutil.remoteui(self.ui, {}), self.rpath, self.lpath)
+                hg.clone(hg.remoteui(self.ui, {}), self.rpath, self.lpath)
             else:
                 raise PreexistingDatastore(True)
         elif os.path.exists(os.path.join(self.target.root, '.hgreview')):
@@ -248,7 +259,7 @@
             with open(os.path.join(self.target.root, '.hgreview'), 'w') as hgrf:
                 hgrf.write('remote = %s\n' % self.rpath)
 
-            self.target.add(['.hgreview'])
+            self.target[None].add(['.hgreview'])
             self.repo = hg.repository(ui, self.lpath, create)
 
     def __getitem__(self, rev):
@@ -256,6 +267,82 @@
         node = hex(self.target[str(rev)].node())
         return ReviewChangeset(self.ui, self.repo, self.target, node)
 
+
+    def reviewed_changesets(self):
+        """Return a list of all the ReviewChangesets in the data store."""
+        hashes = []
+        for fname in os.listdir(self.repo.root):
+            if os.path.isdir(os.path.join(self.repo.root, fname)):
+                try:
+                    self.target[fname]
+                    hashes.append(self[fname])
+                except error.RepoLookupError:
+                    pass
+        return hashes
+
+
+    def get_items(self, identifier):
+        """Return the comments and signoffs which match the given identifier.
+        
+        WARNING: This is going to be slow. Send patches.
+        
+        """
+        rcsets = self.reviewed_changesets()
+        comments = _flatten_filter(rcset.comments for rcset in rcsets)
+        signoffs = _flatten_filter(rcset.signoffs for rcset in rcsets)
+        return [i for i in comments + signoffs if i.identifier.startswith(identifier)]
+
+    def remove_item(self, identifier):
+        """Remove a comment or signoff from this changeset."""
+        items = self.get_items(identifier)
+        if len(items) == 0:
+            raise UnknownIdentifier
+        elif len(items) > 1:
+            raise AmbiguousIdentifier
+        else:
+            items[0]._delete(self.ui, self.repo)
+
+    def edit_comment(self, identifier, message=None, filename=None, lines=None, style=None):
+        olds = self.get_items(identifier)
+
+        if len(olds) == 0:
+            raise UnknownIdentifier
+        elif len(olds) > 1:
+            raise AmbiguousIdentifier
+
+        old = olds[0]
+        if old.itemtype != 'comment':
+            raise WrongEditItemType()
+
+        filename = filename if filename is not None else old.filename
+        if filename and filename not in self.target[old.node].files():
+            raise FileNotInChangeset(filename)
+
+        old.hgdate = util.makedate()
+        old.filename = filename
+        old.lines = lines if lines is not None else old.lines
+        old.message = message if message is not None else old.message
+        old.style = style if style is not None else old.style
+        old._rename(self.ui, self.repo, old.identifier)
+
+    def edit_signoff(self, identifier, message=None, opinion=None, style=None):
+        olds = self.get_items(identifier)
+
+        if len(olds) == 0:
+            raise UnknownIdentifier
+        elif len(olds) > 1:
+            raise AmbiguousIdentifier
+
+        old = olds[0]
+        if old.itemtype != 'signoff':
+            raise WrongEditItemType()
+
+        old.hgdate = util.makedate()
+        old.opinion = opinion if opinion is not None else old.opinion
+        old.message = message if message is not None else old.message
+        old.style = style if style is not None else old.style
+        old._rename(self.ui, self.repo, old.identifier)
+
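The methods above resolve items the way Mercurial resolves short changeset hashes: any unique prefix of the SHA-1 identifier is accepted, with distinct errors for zero and multiple matches. The resolution rule in isolation (illustrative names; the extension raises its own `UnknownIdentifier`/`AmbiguousIdentifier` exceptions):

```python
def resolve_prefix(prefix, identifiers):
    """Resolve a unique identifier prefix, mirroring get_items/remove_item:
    no match means unknown, more than one match means ambiguous."""
    matches = [i for i in identifiers if i.startswith(prefix)]
    if not matches:
        raise KeyError("unknown identifier: %r" % prefix)
    if len(matches) > 1:
        raise ValueError("ambiguous identifier: %r" % prefix)
    return matches[0]
```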
 class ReviewChangeset(object):
     """The review data about one changeset in the target repository.
 
@@ -338,27 +425,26 @@
     def signoffs_for_current_user(self):
         return self.signoffs_for_user(self.ui.username())
 
-    def add_signoff(self, message, opinion='', force=False):
+
+    def add_signoff(self, message, opinion='', style=''):
         """Add (and commit) a signoff for the given revision.
 
         The opinion argument should be 'yes', 'no', or ''.
 
-        If a signoff from the user already exists, a SignoffExists exception
-        will be raised unless the force argument is used.
+        If a signoff from the user already exists, a SignoffExists exception
+        will be raised.
 
         """
         existing = self.signoffs_for_current_user()
 
         if existing:
-            if not force:
-                raise SignoffExists
-            existing[0]._delete(self.ui, self.repo)
+            raise SignoffExists
 
         signoff = ReviewSignoff(self.ui.username(), util.makedate(),
-            self.node, opinion, message)
+                                self.node, opinion, message, style)
         signoff._commit(self.ui, self.repo)
 
-    def add_comment(self, message, filename='', lines=[]):
+    def add_comment(self, message, filename='', lines=[], style=''):
         """Add (and commit) a comment for the given file and lines.
 
         The filename should be normalized to the format Mercurial expects,
@@ -376,7 +462,7 @@
             raise FileNotInChangeset(filename)
 
         comment = ReviewComment(self.ui.username(), util.makedate(),
-            self.node, filename, lines, message)
+            self.node, filename, lines, message, style)
         comment._commit(self.ui, self.repo)
 
 
@@ -603,12 +689,15 @@
                 lambda c: filename and c.lines, self.comments
             )
 
+
 class _ReviewObject(object):
     """A base object for some kind of review data (a signoff or comment)."""
-    def __init__(self, container, commit_message, delete_message=None):
+    def __init__(self, container, commit_message, delete_message, rename_message):
         self.container = container
         self.commit_message = commit_message
         self.delete_message = delete_message
+        self.rename_message = rename_message
+
 
     def _commit(self, ui, repo):
         """Write and commit this object to the given repo."""
@@ -630,9 +719,6 @@
     def _delete(self, ui, repo):
         """Delete and commit this object in the given repo."""
 
-        if not self.delete_message:
-            raise CannotDeleteObject
-
         data = self._render_data()
         filename = util.sha1(data).hexdigest()
         objectpath = os.path.join(repo.root, self.node, self.container, filename)
@@ -642,6 +728,34 @@
         cmdutil.commit(ui, repo, _commitfunc, [objectpath],
             { 'message': self.delete_message % self.node, 'addremove': True, })
 
+    def _rename(self, ui, repo, identifier):
+        """Commit this object in the given repo and mark it as a rename of identifier."""
+
+        data = self._render_data()
+        newidentifier = util.sha1(data).hexdigest()
+        newpath = os.path.join(repo.root, self.node, self.container, newidentifier)
+
+        oldpath = os.path.join(repo.root, self.node, self.container, identifier)
+
+        if oldpath == newpath:
+            # Nothing has changed.  This is probably from a "touch" edit made
+            # within the same second as the previous modification time.
+            return
+
+        wlock = repo.wlock(False)
+        try:
+            cmdutil.copy(ui, repo, [oldpath, newpath], {'force': True}, rename=True)
+        finally:
+            wlock.release()
+
+        with open(newpath, 'w') as objectfile:
+            objectfile.write(data)
+
+        cmdutil.commit(ui, repo, _commitfunc, [oldpath, newpath],
+            { 'message': self.rename_message % self.node })
+
+        self.identifier = newidentifier
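`_rename` exists because review objects are content-addressed: the on-disk filename is the SHA-1 hex digest of the rendered data, so any edit changes the object's identifier and has to be recorded as a Mercurial rename from the old path to the new one. The identity rule on its own (using the stdlib; the extension calls `mercurial.util.sha1`, which computes the same digest):

```python
import hashlib

def identifier_for(rendered_data):
    """A review object's identifier is the SHA-1 hex digest of its
    rendered file contents, so identical content means identical names."""
    return hashlib.sha1(rendered_data.encode("utf-8")).hexdigest()
```

This is also why the early-return in `_rename` works: a "touch" edit that renders byte-for-byte identically hashes to the same path, so there is nothing to rename.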
+
 
     @property
     def local_datetime(self):
@@ -660,9 +774,11 @@
         comment.node
         comment.filename
         comment.lines
+        comment.local_datetime
         comment.message
-        comment.local_datetime
+        comment.style
         comment.identifier
+        comment.itemtype
 
     Each item is a string, except for lines, hgdate, and local_datetime.
 
@@ -676,7 +792,8 @@
     was added.
 
     """
-    def __init__(self, author, hgdate, node, filename, lines, message, identifier=None, **extra):
+    def __init__(self, author, hgdate, node, filename, lines, message,
+                 style='', identifier=None, **extra):
         """Initialize a ReviewComment.
 
         You shouldn't need to create these directly -- use a ReviewChangeset
@@ -688,16 +805,19 @@
             tip_comments = tip_review.comments
 
         """
-        super(ReviewComment, self).__init__(
-            container='comments', commit_message=messages.COMMIT_COMMENT,
-        )
+        super(ReviewComment, self).__init__(container='comments',
+            commit_message=messages.COMMIT_COMMENT,
+            delete_message=messages.DELETE_COMMENT,
+            rename_message=messages.RENAME_COMMENT)
         self.author = author
         self.hgdate = hgdate
         self.node = node
         self.filename = filename
         self.lines = lines
         self.message = message
+        self.style = style
         self.identifier = identifier
+        self.itemtype = 'comment'
 
     def _render_data(self):
         """Render the data of this comment into a string for writing to disk.
@@ -707,9 +827,9 @@
 
         """
         rendered_date = util.datestr(self.hgdate)
-        lines = ','.join(self.lines)
-        return file_templates.COMMENT_FILE_TEMPLATE % ( self.author, rendered_date,
-            self.node, self.filename, lines, self.message )
+        lines = ','.join(map(str, self.lines))
+        return files.COMMENT_FILE_TEMPLATE % ( self.author, rendered_date,
+            self.node, self.filename, lines, self.style, self.message )
 
     def __str__(self):
         """Stringify this comment for easy printing (for debugging)."""
@@ -720,6 +840,7 @@
             self.node,
             self.filename,
             self.lines,
+            self.style,
             self.message,
             '\n',
         ]))
@@ -736,8 +857,11 @@
         signoff.hgdate
         signoff.node
         signoff.opinion
+        signoff.local_datetime
         signoff.message
+        signoff.style
         signoff.identifier
+        signoff.itemtype
 
     Each item is a string, except for hgdate and local_datetime.
 
@@ -749,7 +873,8 @@
     was added.
 
     """
-    def __init__(self, author, hgdate, node, opinion, message, identifier=None, **extra):
+    def __init__(self, author, hgdate, node, opinion, message,
+                 style='', identifier=None, **extra):
         """Initialize a ReviewSignoff.
 
         You shouldn't need to create these directly -- use a ReviewChangeset
@@ -761,16 +886,18 @@
             tip_signoffs = tip_review.signoffs
 
         """
-        super(ReviewSignoff, self).__init__(
-            container='signoffs', commit_message=messages.COMMIT_SIGNOFF,
+        super(ReviewSignoff, self).__init__(container='signoffs',
+            commit_message=messages.COMMIT_SIGNOFF,
             delete_message=messages.DELETE_SIGNOFF,
-        )
+            rename_message=messages.RENAME_SIGNOFF)
         self.author = author
         self.hgdate = hgdate
         self.node = node
         self.opinion = opinion
         self.message = message
+        self.style = style
         self.identifier = identifier
+        self.itemtype = 'signoff'
 
     def _render_data(self):
         """Render the data of this signoff into a string for writing to disk.
@@ -780,7 +907,7 @@
 
         """
         rendered_date = util.datestr(self.hgdate)
-        return file_templates.SIGNOFF_FILE_TEMPLATE % ( self.author, rendered_date,
-            self.node, self.opinion, self.message )
+        return files.SIGNOFF_FILE_TEMPLATE % ( self.author, rendered_date,
+            self.node, self.opinion, self.style, self.message )
 
 
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/review/cli.py	Thu Jul 01 19:32:49 2010 -0400
@@ -0,0 +1,424 @@
+"""The review extension's command-line UI.
+
+This module is imported in __init__.py so that Mercurial will add the
+review command to its own UI when you add the extension in ~/.hgrc.
+
+"""
+
+import api, helps, messages
+from mercurial import help, templatefilters, util
+from mercurial.node import short
+
+
+def _get_datastore(ui, repo):
+    try:
+        return api.ReviewDatastore(ui, repo)
+    except api.UninitializedDatastore:
+        raise util.Abort(messages.NO_DATA_STORE)
+
+def _get_message(ui, rd, initial):
+    message = ui.edit(initial, rd.repo.ui.username())
+    return '\n'.join(l for l in message.splitlines()
+                     if not l.startswith('HG: ')).strip()
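`_get_message` launches the user's editor and then discards Mercurial's `HG:` helper lines, keeping only the user's own text. The filtering step by itself:

```python
def strip_hg_lines(text):
    """Drop editor boilerplate lines starting with 'HG: ' and trim the rest."""
    return '\n'.join(l for l in text.splitlines()
                     if not l.startswith('HG: ')).strip()
```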
+
+
+def _edit_comment(rd, comment, *fnames, **opts):
+    lines = opts.pop('lines')
+    message = opts.pop('message').strip()
+    mdown = opts.pop('mdown')
+
+    if len(fnames) > 1:
+        raise util.Abort(messages.EDIT_REQUIRES_ONE_OR_LESS_FILES)
+
+    if not lines:
+        lines = comment.lines
+    else:
+        lines = [int(l.strip()) for l in lines.split(',')]
+
+    if not message:
+        message = comment.message
+    
+    if not fnames:
+        fnames = [comment.filename]
+    else:
+        fnames = [api.sanitize_path(fnames[0])]
+
+    style = 'markdown' if mdown or comment.style == 'markdown' else ''
+
+    try:
+        rd.edit_comment(comment.identifier, message, fnames[0], lines, style)
+    except api.FileNotInChangeset:
+        raise util.Abort(messages.COMMENT_FILE_DOES_NOT_EXIST % (
+                             fnames[0], rd.target[comment.node].rev()))
+
+def _edit_signoff(rd, signoff, **opts):
+    mdown = opts.pop('mdown')
+    message = opts.pop('message').strip()
+
+    yes, no = opts.pop('yes'), opts.pop('no')
+    if yes and no:
+        raise util.Abort(messages.SIGNOFF_OPINION_CONFLICT)
+    opinion = 'yes' if yes else ('no' if no else signoff.opinion)
+
+    if not message:
+        message = signoff.message
+    
+    style = 'markdown' if mdown or signoff.style == 'markdown' else ''
+
+    rd.edit_signoff(signoff.identifier, message, opinion, style)
+
+
+def _web_command(ui, repo, **opts):
+    ui.note(messages.WEB_START)
+    read_only = opts.pop('read_only')
+    allow_anon = opts.pop('allow_anon')
+    address = opts.pop('address')
+    port = int(opts.pop('port'))
+    rd = _get_datastore(ui, repo)
+
+    import web
+    web.load_interface(ui, repo, read_only=read_only, allow_anon=allow_anon,
+                       address=address, port=port, open=False)
+
+def _init_command(ui, repo, **opts):
+    ui.note(messages.INIT_START)
+
+    try:
+        api.ReviewDatastore(ui, repo, rpath=opts.pop('remote_path'), create=True)
+        if '.hgreview' not in repo['tip'].manifest():
+            ui.status(messages.INIT_SUCCESS_UNCOMMITTED)
+        else:
+            ui.status(messages.INIT_SUCCESS_CLONED)
+    except api.RelativeRemotePath:
+        raise util.Abort(messages.INIT_UNSUPPORTED_RELATIVE_RPATH)
+    except api.DatastoreRequiresRemotePath:
+        raise util.Abort(messages.INIT_REQUIRES_REMOTE_PATH)
+    except api.PreexistingDatastore, e:
+        if e.committed:
+            ui.note(messages.INIT_EXISTS)
+        else:
+            raise util.Abort(messages.INIT_EXISTS_UNCOMMITTED)
+
+def _comment_command(ui, repo, *fnames, **opts):
+    rev = opts.pop('rev')
+    lines = opts.pop('lines')
+    message = opts.pop('message').strip()
+    mdown = opts.pop('mdown')
+    rd = _get_datastore(ui, repo)
+
+    rcset = rd[rev]
+
+    if lines and not len(fnames) == 1:
+        raise util.Abort(messages.COMMENT_LINES_REQUIRE_FILE)
+
+    if lines:
+        lines = lines.split(',')
+
+    fnames = map(lambda f: api.sanitize_path(f, repo), fnames) if fnames else ['']
+
+    if not message:
+        template = mdown and messages.COMMENT_EDITOR_MDOWN or messages.COMMENT_EDITOR
+        message = _get_message(ui, rd, template)
+        if not message:
+            raise util.Abort(messages.COMMENT_REQUIRES_MESSAGE)
+
+    style = mdown and 'markdown' or ''
+
+    for fn in fnames:
+        try:
+            rcset.add_comment(message=message, filename=fn, lines=lines, style=style)
+        except api.FileNotInChangeset:
+            raise util.Abort(messages.COMMENT_FILE_DOES_NOT_EXIST % (
+                                     fn, repo[rev].rev()))
+
+def _signoff_command(ui, repo, **opts):
+    message = opts.pop('message').strip()
+    mdown = opts.pop('mdown')
+    rd = _get_datastore(ui, repo)
+    rcset = rd[opts.pop('rev')]
+
+    yes, no = opts.pop('yes'), opts.pop('no')
+    if yes and no:
+        raise util.Abort(messages.SIGNOFF_OPINION_CONFLICT)
+    opinion = 'yes' if yes else ('no' if no else '')
+
+    if rcset.signoffs_for_current_user():
+        raise util.Abort(messages.SIGNOFF_EXISTS)
+
+    if not message:
+        template = mdown and messages.SIGNOFF_EDITOR_MDOWN or messages.SIGNOFF_EDITOR
+        message = _get_message(ui, rd, template)
+        if not message:
+            raise util.Abort(messages.SIGNOFF_REQUIRES_MESSAGE)
+
+    style = mdown and 'markdown' or ''
+
+    rcset.add_signoff(message=message, opinion=opinion, style=style)
+
+def _check_command(ui, repo, **opts):
+    rd = _get_datastore(ui, repo)
+    rcset = rd[opts.pop('rev')]
+
+    if opts.pop('no_nos'):
+        if any(filter(lambda s: s.opinion == "no", rcset.signoffs)):
+            raise util.Abort(messages.CHECK_HAS_NOS)
+
+    yes_count = opts.pop('yeses')
+    if yes_count:
+        yes_count = int(yes_count)
+        if len(filter(lambda s: s.opinion == "yes", rcset.signoffs)) < yes_count:
+            raise util.Abort(messages.CHECK_TOO_FEW_YESES)
+
+    if opts.pop('seen'):
+        if not rcset.signoffs and not rcset.comments:
+            raise util.Abort(messages.CHECK_UNSEEN)
+
+    ui.note(messages.CHECK_SUCCESS)
+
+def _review_command(ui, repo, *fnames, **opts):
+    rev = opts.pop('rev')
+    context = int(opts.pop('unified'))
+    rd = _get_datastore(ui, repo)
+
+    cset = repo[rev]
+    rcset = rd[rev]
+
+    comment_count = len(rcset.comments)
+    author_count = len(set(comment.author for comment in rcset.comments))
+
+    ui.write(messages.REVIEW_LOG_CSET % (cset.rev(), short(cset.node())))
+    ui.write(messages.REVIEW_LOG_AUTHOR % cset.user())
+    ui.write(messages.REVIEW_LOG_SUMMARY % cset.description().split('\n')[0])
+
+    signoffs = rcset.signoffs
+    signoffs_yes = filter(lambda s: s.opinion == 'yes', signoffs)
+    signoffs_no = filter(lambda s: s.opinion == 'no', signoffs)
+    signoffs_neutral = set(signoffs).difference(signoffs_yes + signoffs_no)
+
+    ui.write(messages.REVIEW_LOG_SIGNOFFS % (
+        len(signoffs), len(signoffs_yes), len(signoffs_no), len(signoffs_neutral))
+    )
+    ui.write(messages.REVIEW_LOG_COMMENTS % (comment_count, author_count))
+
+    def _build_item_header(item, author_template, author_extra=None):
+        author = templatefilters.person(item.author)
+        author_args = (author,)
+        if author_extra:
+            author_args = author_args + author_extra
+        author_part = author_template % author_args
+
+        age = templatefilters.age(item.hgdate)
+        age_part = messages.REVIEW_LOG_AGE % age
+        if ui.debugflag:
+            hash_part = messages.REVIEW_LOG_IDENTIFIER % item.identifier
+        elif ui.verbose:
+            hash_part = messages.REVIEW_LOG_IDENTIFIER % item.identifier[:12]
+        else:
+            hash_part = ''
+        detail_part = age_part + hash_part
+
+        spacing = 80 - (len(author_part) + len(detail_part))
+        if spacing <= 0:
+            spacing = 1
+        spacing = ' ' * spacing
+
+        return author_part + spacing + detail_part + '\n'
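`_build_item_header` right-justifies the age/identifier details against an 80-column line, collapsing to a single space when both parts are too wide to fit. The padding math in isolation (illustrative helper name):

```python
def build_header(author_part, detail_part, width=80):
    """Pad between the two parts so detail_part ends at `width` columns,
    falling back to a single space if the parts already overflow."""
    spacing = width - (len(author_part) + len(detail_part))
    if spacing <= 0:
        spacing = 1
    return author_part + ' ' * spacing + detail_part + '\n'
```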
+
+
+    def _print_comment(comment, before='', after=''):
+        ui.write(before)
+        ui.write(_build_item_header(comment, messages.REVIEW_LOG_COMMENT_AUTHOR),
+                 label='review.comment')
+
+        for line in comment.message.splitlines():
+            ui.write(messages.REVIEW_LOG_COMMENT_LINE % line, label='review.comment')
+
+        ui.write(after)
+
+    def _print_signoff(signoff, before='', after=''):
+        ui.write(before)
+
+        opinion = signoff.opinion or 'neutral'
+        label = 'review.signoff.%s' % opinion
+        header = _build_item_header(signoff, messages.REVIEW_LOG_SIGNOFF_AUTHOR, (opinion,))
+        ui.write(header, label=label)
+
+        for line in signoff.message.splitlines():
+            ui.write(messages.REVIEW_LOG_SIGNOFF_LINE % line, label=label)
+
+        ui.write(after)
+
+
+    if rcset.signoffs:
+        ui.write('\n')
+    for signoff in rcset.signoffs:
+        _print_signoff(signoff, before='\n')
+
+    review_level_comments = rcset.review_level_comments()
+    if review_level_comments:
+        ui.write('\n')
+    for comment in review_level_comments:
+        _print_comment(comment, before='\n')
+
+    if ui.quiet:
+        return
+
+    if not fnames:
+        fnames = rcset.files()
+    fnames = [api.sanitize_path(fname, repo) for fname in fnames]
+    fnames = [fname for fname in fnames if rcset.has_diff(fname)]
+
+    for filename in fnames:
+        header = messages.REVIEW_LOG_FILE_HEADER % filename
+        print '\n\n%s %s' % (header, '-'*(80-(len(header)+1)))
+
+        for comment in rcset.file_level_comments(filename):
+            _print_comment(comment)
+
+        annotated_diff = rcset.annotated_diff(filename, context)
+        prefix = '%%%dd: ' % len(str(annotated_diff.next()))
+
+        for line in annotated_diff:
+            if line['skipped']:
+                ui.write(messages.REVIEW_LOG_SKIPPED % line['skipped'])
+                for comment in line['comments']:
+                    _print_comment(comment)
+                continue
+            
+            ui.write('%s ' % (prefix % line['number']))
+            if line['content'].startswith('+'):
+                ui.write('%s\n' % line['content'], label='diff.inserted')
+            elif line['content'].startswith('-'):
+                ui.write('%s\n' % line['content'], label='diff.deleted')
+            else:
+                ui.write('%s\n' % line['content'])
+
+            for comment in line['comments']:
+                _print_comment(comment)
+
+def _delete_command(ui, repo, *identifiers, **opts):
+    # TODO: require -f to delete someone else's item
+    force = opts.pop('force')
+    rd = _get_datastore(ui, repo)
+    
+    if not identifiers:
+        raise util.Abort(messages.REQUIRES_IDS)
+
+    for i in identifiers:
+        try:
+            rd.remove_item(i)
+        except api.UnknownIdentifier:
+            raise util.Abort(messages.UNKNOWN_ID % i)
+        except api.AmbiguousIdentifier:
+            raise util.Abort(messages.AMBIGUOUS_ID % i)
+
+def _edit_command(ui, repo, *args, **opts):
+    # TODO: require -f to edit someone else's item
+    # TODO: support forcing of plain text
+    force = opts.pop('force')
+    identifier = opts.pop('edit')
+    rd = _get_datastore(ui, repo)
+
+    items = rd.get_items(identifier)
+    if len(items) == 0:
+        raise util.Abort(messages.UNKNOWN_ID % identifier)
+    elif len(items) > 1:
+        raise util.Abort(messages.AMBIGUOUS_ID % identifier)
+    item = items[0]
+
+    if item.itemtype == 'comment':
+        _edit_comment(rd, item, *args, **opts)
+    elif item.itemtype == 'signoff':
+        _edit_signoff(rd, item, **opts)
+
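`_edit_command` resolves a user-supplied identifier and aborts when it is unknown or matches more than one item. A hypothetical sketch of that prefix matching, assuming items are keyed by hex identifiers (the exception names here are illustrative stand-ins, not the extension's real API):

```python
class UnknownIdentifier(Exception):
    pass

class AmbiguousIdentifier(Exception):
    pass

def resolve(prefix, identifiers):
    """Match a (possibly shortened) identifier prefix against known ids,
    raising if it matches nothing or more than one item."""
    matches = [i for i in identifiers if i.startswith(prefix)]
    if not matches:
        raise UnknownIdentifier(prefix)
    if len(matches) > 1:
        raise AmbiguousIdentifier(prefix)
    return matches[0]

ids = ['a1b2c3', 'a1ff00', 'deadbe']
print(resolve('de', ids))  # an unambiguous prefix
```

Resolving prefixes this way lets users type short ids on the command line, at the cost of needing explicit unknown/ambiguous error paths like the ones above.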
+
+def review(ui, repo, *args, **opts):
+    """code review changesets in the current repository
+
+    To start using the review extension with a repository, you need to
+    initialize the code review data::
+
+        hg help review-init
+
+    Once you've initialized it (and cloned the review data repo to a place
+    where others can get to it) you can start reviewing changesets.
+
+    See the following help topics if you want to use the command-line
+    interface:
+
+    - hg help review-review
+    - hg help review-comment
+    - hg help review-signoff
+    - hg help review-check
+
+    Once you've reviewed some changesets, don't forget to push your comments and
+    signoffs so other people can view them.
+
+    """
+    if opts.pop('web'):
+        return _web_command(ui, repo, **opts)
+    elif opts.pop('init'):
+        return _init_command(ui, repo, **opts)
+    elif opts.pop('comment'):
+        return _comment_command(ui, repo, *args, **opts)
+    elif opts.pop('signoff'):
+        return _signoff_command(ui, repo, **opts)
+    elif opts.pop('check'):
+        return _check_command(ui, repo, **opts)
+    elif opts.get('edit'):
+        return _edit_command(ui, repo, *args, **opts)
+    elif opts.pop('delete'):
+        return _delete_command(ui, repo, *args, **opts)
+    else:
+        return _review_command(ui, repo, *args, **opts)
+
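`review` above dispatches on boolean flags, popping each one so the chosen sub-command never sees the others. A simplified sketch of the same pop-and-dispatch pattern (the flag names match the code above; the string return values are placeholders for the real handler calls):

```python
def dispatch(opts):
    """Route to a sub-command name based on which flag is set,
    popping flags in priority order like review() above."""
    for flag in ('web', 'init', 'comment', 'signoff',
                 'check', 'edit', 'delete'):
        if opts.pop(flag, False):
            return flag
    return 'review'  # default action when no flag is given

print(dispatch({'web': False, 'comment': True}))
```

Note that the real command uses `opts.get('edit')` rather than `pop`, because `--edit` carries a string identifier that `_edit_command` still needs to read.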
+
+cmdtable = {
+    'review': (review, [
+        ('U', 'unified',     '5',   'number of lines of context to show'),
+        ('m', 'message',     '',    'use <text> as the comment or signoff message'),
+        ('',  'mdown',       False, 'use Markdown to format the comment or signoff message'),
+        ('r', 'rev',         '.',   'the revision to review'),
+
+        ('d', 'delete',      False, 'delete a comment or signoff'),
+        ('',  'edit',        '',    'edit a comment or signoff'),
+
+        ('',  'check',       False, 'check the review status of the given revision'),
+        ('',  'no-nos',      False, 'ensure this revision does NOT have signoffs of "no"'),
+        ('',  'yeses',       '',    'ensure this revision has at least NUM signoffs of "yes"'),
+        ('',  'seen',        False, 'ensure this revision has a comment or signoff'),
+
+        ('i', 'init',        False, 'start code reviewing this repository'),
+        ('',  'remote-path', '',    'the remote path to code review data'),
+
+        ('c', 'comment',     False, 'add a comment'),
+        ('l', 'lines',       '',    'the line(s) of the file to comment on'),
+
+        ('s', 'signoff',     False, 'sign off'),
+        ('',  'yes',         False, 'sign off as stating the changeset is good'),
+        ('',  'no',          False, 'sign off as stating the changeset is bad'),
+        ('f', 'force',       False, 'force an action'),
+
+        ('w', 'web',         False,       'launch the web interface'),
+        ('',  'read-only',   False,       'make the web interface read-only'),
+        ('',  'allow-anon',  False,       'allow anonymous comments on the web interface'),
+        ('',  'address',     '127.0.0.1', 'run the web interface on the specified address'),
+        ('',  'port',        '8080',      'run the web interface on the specified port'),
+    ],
+    'hg review')
+}
+
+help.helptable += (
+    (['review-init'], ('Initializing code review for a repository'), (helps.INIT)),
+    (['review-review'], ('Viewing code review data for changesets'), (helps.REVIEW)),
+    (['review-comment'], ('Adding code review comments for changesets'), (helps.COMMENT)),
+    (['review-signoff'], ('Adding code review signoffs for changesets'), (helps.SIGNOFF)),
+    (['review-check'], ('Checking the review status of changesets'), (helps.CHECK)),
+)
+
+colortable = {
+    'review.comment': 'cyan',
+    'review.signoff.yes': 'green',
+    'review.signoff.neutral': 'cyan',
+    'review.signoff.no': 'red',
+}
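The `colortable` above maps hg-review's output labels to color effect names, which Mercurial's color extension turns into terminal escape sequences. A rough illustration of that rendering, assuming standard ANSI color codes (the escape handling here is a sketch, not the color extension's actual implementation):

```python
# Standard ANSI foreground color codes (an assumption of this sketch).
ANSI = {'cyan': '36', 'green': '32', 'red': '31'}

colortable = {
    'review.comment': 'cyan',
    'review.signoff.yes': 'green',
    'review.signoff.neutral': 'cyan',
    'review.signoff.no': 'red',
}

def render(text, label):
    """Wrap text in the ANSI escape for the label's configured color,
    passing unknown labels through unchanged."""
    effect = colortable.get(label)
    if effect is None:
        return text
    return '\x1b[%sm%s\x1b[0m' % (ANSI[effect], text)

print(render('looks good', 'review.signoff.yes'))
```

Because the mapping lives in data rather than code, users can override any of these colors from their hgrc without touching the extension.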
--- a/review/extension_ui.py	Tue Jun 15 20:30:23 2010 -0400
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,367 +0,0 @@
-"""The review extension's command-line UI.
-
-This module is imported in __init__.py so that Mercurial will add the
-review command to its own UI when you add the extension in ~/.hgrc.
-
-"""
-
-import re
-import api, helps, messages
-from mercurial import help, templatefilters, util
-from mercurial.node import short
-from mercurial import extensions
-
-
-def _get_message(ui, rd, initial):
-    message = ui.edit(initial, rd.repo.ui.username())
-    return '\n'.join(l for l in message.splitlines()
-                     if not l.startswith('HG: ')).strip()
-
-
-def _web_command(ui, repo, **opts):
-    ui.note(messages.WEB_START)
-    read_only = opts.pop('read_only')
-    allow_anon = opts.pop('allow_anon')
-    address = opts.pop('address')
-    port = int(opts.pop('port'))
-
-    import web_ui
-    web_ui.load_interface(ui, repo, read_only=read_only, allow_anon=allow_anon,
-                          address=address, port=port, open=False)
-
-def _init_command(ui, repo, **opts):
-    ui.note(messages.INIT_START)
-
-    try:
-        api.ReviewDatastore(ui, repo, rpath=opts.pop('remote_path'), create=True)
-        if '.hgreview' not in repo['tip'].manifest():
-            ui.status(messages.INIT_SUCCESS_UNCOMMITTED)
-        else:
-            ui.status(messages.INIT_SUCCESS_CLONED)
-    except api.RelativeRemotePath:
-        raise util.Abort(messages.INIT_UNSUPPORTED_RELATIVE_RPATH)
-    except api.DatastoreRequiresRemotePath:
-        raise util.Abort(messages.INIT_REQUIRES_REMOTE_PATH)
-    except api.PreexistingDatastore, e:
-        if e.committed:
-            ui.note(messages.INIT_EXISTS)
-        else:
-            raise util.Abort(messages.INIT_EXISTS_UNCOMMITTED)
-
-def _comment_command(ui, repo, *fnames, **opts):
-    rev = opts.pop('rev')
-    lines = opts.pop('lines')
-    message = opts.pop('message').strip()
-
-    rd = api.ReviewDatastore(ui, repo)
-    rcset = rd[rev]
-
-    if lines and not len(fnames) == 1:
-        raise util.Abort(messages.COMMENT_LINES_REQUIRE_FILE)
-
-    if lines:
-        lines = lines.split(',')
-
-    fnames = map(lambda f: api.sanitize_path(f, repo), fnames) if fnames else ['']
-
-    if not message:
-        message = _get_message(ui, rd, messages.COMMENT_EDITOR_MESSAGE)
-        if not message:
-            raise util.Abort(messages.COMMENT_REQUIRES_MESSAGE)
-
-    for fn in fnames:
-        try:
-            rcset.add_comment(message=message, filename=fn, lines=lines)
-        except api.FileNotInChangeset:
-            raise util.Abort(messages.COMMENT_FILE_DOES_NOT_EXIST % (
-                                     fn, repo[rev].rev()))
-
-def _signoff_command(ui, repo, **opts):
-    rd = api.ReviewDatastore(ui, repo)
-    rcset = rd[opts.pop('rev')]
-    message = opts.pop('message').strip()
-    force = opts.pop('force')
-
-    yes, no = opts.pop('yes'), opts.pop('no')
-    if yes and no:
-        raise util.Abort(messages.SIGNOFF_OPINION_CONFLICT)
-    opinion = 'yes' if yes else ('no' if no else '')
-
-    if rcset.signoffs_for_current_user() and not force:
-        raise util.Abort(messages.SIGNOFF_EXISTS)
-
-    if not message:
-        message = _get_message(ui, rd, messages.SIGNOFF_EDITOR_MESSAGE)
-        if not message:
-            raise util.Abort(messages.SIGNOFF_REQUIRES_MESSAGE)
-
-    rcset.add_signoff(message=message, opinion=opinion, force=force)
-
-def _check_command(ui, repo, **opts):
-    rd = api.ReviewDatastore(ui, repo)
-    rcset = rd[opts.pop('rev')]
-
-    if opts.pop('no_nos'):
-        if any(filter(lambda s: s.opinion == "no", rcset.signoffs)):
-            raise util.Abort(messages.CHECK_HAS_NOS)
-
-    yes_count = opts.pop('yeses')
-    if yes_count:
-        yes_count = int(yes_count)
-        if len(filter(lambda s: s.opinion == "yes", rcset.signoffs)) < yes_count:
-            raise util.Abort(messages.CHECK_TOO_FEW_YESES)
-
-    if opts.pop('seen'):
-        if not rcset.signoffs and not rcset.comments:
-            raise util.Abort(messages.CHECK_UNSEEN)
-
-    ui.note(messages.CHECK_SUCCESS)
-
-def _review_command(ui, repo, *fnames, **opts):
-    rev = opts.pop('rev')
-    context = int(opts.pop('unified'))
-
-    try:
-        rd = api.ReviewDatastore(ui, repo)
-    except api.UninitializedDatastore:
-        raise util.Abort(messages.NO_DATA_STORE)
-    cset = repo[rev]
-    rcset = rd[rev]
-
-    comment_count = len(rcset.comments)
-    author_count = len(set(comment.author for comment in rcset.comments))
-
-    ui.write(messages.REVIEW_LOG_CSET % (cset.rev(), short(cset.node())))
-    ui.write(messages.REVIEW_LOG_AUTHOR % cset.user())
-    ui.write(messages.REVIEW_LOG_SUMMARY % cset.description().split('\n')[0])
-
-    signoffs = rcset.signoffs
-    signoffs_yes = filter(lambda s: s.opinion == 'yes', signoffs)
-    signoffs_no = filter(lambda s: s.opinion == 'no', signoffs)
-    signoffs_neutral = set(signoffs).difference(signoffs_yes + signoffs_no)
-
-    ui.write(messages.REVIEW_LOG_SIGNOFFS % (
-        len(signoffs), len(signoffs_yes), len(signoffs_no), len(signoffs_neutral))
-    )
-    ui.write(messages.REVIEW_LOG_COMMENTS % (comment_count, author_count))
-
-    def _build_item_header(item, author_template, author_extra=None):
-        author = templatefilters.person(item.author)
-        author_args = (author,)
-        if author_extra:
-            author_args = author_args + author_extra
-        author_part = author_template % author_args
-
-        age = templatefilters.age(item.hgdate)
-        age_part = messages.REVIEW_LOG_AGE % age
-        if ui.debugflag:
-            hash_part = messages.REVIEW_LOG_IDENTIFIER % item.identifier
-        elif ui.verbose:
-            hash_part = messages.REVIEW_LOG_IDENTIFIER % item.identifier[:12]
-        else:
-            hash_part = ''
-        detail_part = age_part + hash_part
-
-        spacing = 80 - (len(author_part) + len(detail_part))
-        if spacing <= 0:
-            spacing = 1
-        spacing = ' ' * spacing
-
-        return author_part + spacing + detail_part + '\n'
-
-
-    def _print_comment(comment, before='', after=''):
-        ui.write(before)
-        ui.write(_build_item_header(comment, messages.REVIEW_LOG_COMMENT_AUTHOR))
-
-        for line in comment.message.splitlines():
-            ui.write(messages.REVIEW_LOG_COMMENT_LINE % line)
-
-        ui.write(after)
-
-    def _print_signoff(signoff, before='', after=''):
-        ui.write(before)
-
-        opinion = signoff.opinion or 'neutral'
-        ui.write(_build_item_header(signoff, messages.REVIEW_LOG_SIGNOFF_AUTHOR, (opinion,)))
-
-        for line in signoff.message.splitlines():
-            ui.write(messages.REVIEW_LOG_SIGNOFF_LINE % line)
-
-        ui.write(after)
-
-
-    if rcset.signoffs:
-        ui.write('\n')
-    for signoff in rcset.signoffs:
-        _print_signoff(signoff, before='\n')
-
-    review_level_comments = rcset.review_level_comments()
-    if review_level_comments:
-        ui.write('\n')
-    for comment in review_level_comments:
-        _print_comment(comment, before='\n')
-
-    if ui.quiet:
-        return
-
-    if not fnames:
-        fnames = rcset.files()
-    fnames = [api.sanitize_path(fname, repo) for fname in fnames]
-    fnames = [fname for fname in fnames if rcset.has_diff(fname)]
-
-    for filename in fnames:
-        header = messages.REVIEW_LOG_FILE_HEADER % filename
-        print '\n\n%s %s' % (header, '-'*(80-(len(header)+1)))
-
-        for comment in rcset.file_level_comments(filename):
-            _print_comment(comment)
-
-        annotated_diff = rcset.annotated_diff(filename, context)
-        prefix = '%%%dd: ' % len(str(annotated_diff.next()))
-
-        for line in annotated_diff:
-            if line['skipped']:
-                ui.write(messages.REVIEW_LOG_SKIPPED % line['skipped'])
-                for comment in line['comments']:
-                    _print_comment(comment)
-                continue
-
-            ui.write('%s %s\n' % (prefix % line['number'], line['content']))
-
-            for comment in line['comments']:
-                _print_comment(comment)
-
-
-_review_effects = {
-    'deleted': ['red'],
-    'inserted': ['green'],
-    'comments': ['cyan'],
-    'signoffs': ['yellow'],
-}
-_review_re = [
-    (re.compile(r'^(?P<rest> *\d+:  )(?P<colorized>[-].*)'), 'deleted'),
-    (re.compile(r'^(?P<rest> *\d+:  )(?P<colorized>[+].*)'), 'inserted'),
-    (re.compile(r'^(?P<colorized>#.*)'), 'comments'),
-    (re.compile(r'^(?P<colorized>\$.*)'), 'signoffs'),
-]
-
-def colorwrap(orig, *args):
-    '''wrap ui.write for colored diff output'''
-    def _colorize(s):
-        lines = s.split('\n')
-        for i, line in enumerate(lines):
-            if not line:
-                continue
-            else:
-                for r, style in _review_re:
-                    m = r.match(line)
-                    if m:
-                        lines[i] = "%s%s" % (
-                            m.groupdict().get('rest', ''),
-                            render_effects(m.group('colorized'), _review_effects[style]))
-                        break
-        return '\n'.join(lines)
-
-    orig(*[_colorize(s) for s in args])
-
-def colorreview(orig, ui, repo, *fnames, **opts):
-    '''colorize review command output'''
-    oldwrite = extensions.wrapfunction(ui, 'write', colorwrap)
-    try:
-        orig(ui, repo, *fnames, **opts)
-    finally:
-        ui.write = oldwrite
-
-
-_ui = None
-def uisetup(ui):
-    global _ui
-    _ui = ui
-
-def extsetup():
-    try:
-        color = extensions.find('color')
-        color._setupcmd(_ui, 'review', cmdtable, colorreview,
-                       _review_effects)
-        global render_effects
-        render_effects = color.render_effects
-    except KeyError:
-        pass
-
-
-def review(ui, repo, *fnames, **opts):
-    """code review changesets in the current repository
-
-    To start using the review extension with a repository, you need to
-    initialize the code review data::
-
-        hg help review-init
-
-    Once you've initialized it (and cloned the review data repo to a place
-    where others can get to it) you can start reviewing changesets.
-
-    See the following help topics if you want to use the command-line
-    interface:
-
-    - hg help review-review
-    - hg help review-comment
-    - hg help review-signoff
-    - hg help review-check
-
-    Once you've reviewed some changesets don't forget to push your comments and
-    signoffs so other people can view them.
-
-    """
-    if opts.pop('web'):
-        return _web_command(ui, repo, **opts)
-    elif opts.pop('init'):
-        return _init_command(ui, repo, **opts)
-    elif opts.pop('comment'):
-        return _comment_command(ui, repo, *fnames, **opts)
-    elif opts.pop('signoff'):
-        return _signoff_command(ui, repo, **opts)
-    elif opts.pop('check'):
-        return _check_command(ui, repo, **opts)
-    else:
-        return _review_command(ui, repo, *fnames, **opts)
-
-
-cmdtable = {
-    'review': (review, [
-        ('U', 'unified',     '5',   'number of lines of context to show'),
-        ('m', 'message',     '',    'use <text> as the comment or signoff message'),
-        ('r', 'rev',         '.',   'the revision to review'),
-
-        ('',  'check',       False, 'check the review status of the given revision'),
-        ('',  'no-nos',      False, 'ensure this revision does NOT have signoffs of "no"'),
-        ('',  'yeses',       '',    'ensure this revision has at least NUM signoffs of "yes"'),
-        ('',  'seen',        False, 'ensure this revision has a comment or signoff'),
-
-        ('i', 'init',        False, 'start code reviewing this repository'),
-        ('',  'remote-path', '',    'the remote path to code review data'),
-
-        ('c', 'comment',     False, 'add a comment'),
-        ('l', 'lines',       '',    'the line(s) of the file to comment on'),
-
-        ('s', 'signoff',     False, 'sign off'),
-        ('',  'yes',         False, 'sign off as stating the changeset is good'),
-        ('',  'no',          False, 'sign off as stating the changeset is bad'),
-        ('f', 'force',       False, 'overwrite an existing signoff'),
-
-        ('w', 'web',         False,       'launch the web interface'),
-        ('',  'read-only',   False,       'make the web interface read-only'),
-        ('',  'allow-anon',  False,       'allow anonymous comments on the web interface'),
-        ('',  'address',     '127.0.0.1', 'run the web interface on the specified address'),
-        ('',  'port',        '8080',      'run the web interface on the specified port'),
-    ],
-    'hg review')
-}
-
-help.helptable += (
-    (['review-init'], ('Initializing code review for a repository'), (helps.INIT)),
-    (['review-review'], ('Viewing code review data for changesets'), (helps.REVIEW)),
-    (['review-comment'], ('Adding code review comments for changesets'), (helps.COMMENT)),
-    (['review-signoff'], ('Adding code review signoffs for changesets'), (helps.SIGNOFF)),
-    (['review-check'], ('Checking the review status of changesets'), (helps.CHECK)),
-)
--- a/review/file_templates.py	Tue Jun 15 20:30:23 2010 -0400
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,18 +0,0 @@
-"""Templates for hg-review's data files."""
-
-COMMENT_FILE_TEMPLATE = """\
-author:%s
-hgdate:%s
-node:%s
-filename:%s
-lines:%s
-
-%s"""
-
-SIGNOFF_FILE_TEMPLATE = """\
-author:%s
-hgdate:%s
-node:%s
-opinion:%s
-
-%s"""
\ No newline at end of file
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/review/files.py	Thu Jul 01 19:32:49 2010 -0400
@@ -0,0 +1,20 @@
+"""Templates for hg-review's data files."""
+
+COMMENT_FILE_TEMPLATE = """\
+author:%s
+hgdate:%s
+node:%s
+filename:%s
+lines:%s
+style:%s
+
+%s"""
+
+SIGNOFF_FILE_TEMPLATE = """\
+author:%s
+hgdate:%s
+node:%s
+opinion:%s
+style:%s
+
+%s"""
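The file templates above serialize review items as simple `key:value` header lines followed by a blank line and the message body, filled in with `%`-formatting. A small round-trip sketch of writing and re-parsing such a record (the field values are made up for illustration):

```python
COMMENT_FILE_TEMPLATE = """\
author:%s
hgdate:%s
node:%s
filename:%s
lines:%s
style:%s

%s"""

# Render a record, then parse it back into headers and body.
data = COMMENT_FILE_TEMPLATE % (
    'Steve <steve@example.com>', '1277890000 14400', 'abc123',
    'review/files.py', '3,4', 'markdown', 'Nice cleanup.')

headers, _, body = data.partition('\n\n')
fields = dict(line.split(':', 1) for line in headers.splitlines())
print(fields['filename'], '|', body)
```

The blank-line separator is what makes the format trivially parseable: everything before the first `\n\n` is metadata, everything after is the free-form message.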
--- a/review/helps.py	Tue Jun 15 20:30:23 2010 -0400
+++ b/review/helps.py	Thu Jul 01 19:32:49 2010 -0400
@@ -54,7 +54,7 @@
 """
 
 COMMENT = r"""
-USAGE: hg review --comment -m MESSAGE [-r REV] [-l LINES] [FILE]
+USAGE: hg review --comment [-m MESSAGE] [--mdown] [-r REV] [-l LINES] [FILE]
 
 If no revision is given the current parent of the working directory will be
 used.
@@ -68,6 +68,8 @@
 specific lines.  LINES should be a comma-separated list of line numbers (as
 numbered in the output of ``hg review``), such as ``3`` or ``2,3``.
 
+If ``--mdown`` is used the comment text will be interpreted as Markdown.
+
 Examples::
 
     hg review --comment -m 'This changeset needs to go in branch X.'
@@ -77,7 +79,7 @@
 """
 
 SIGNOFF = r"""
-USAGE: hg review --signoff -m MESSAGE [--yes | --no] [-r REV] [--force]
+USAGE: hg review --signoff [-m MESSAGE] [--mdown] [--yes | --no] [-r REV]
 
 If no revision is given the current parent of the working directory will be
 used.
@@ -87,15 +89,13 @@
 individual project to decide exactly what that means.  If neither option is
 given the signoff will be marked as "neutral".
 
-If you've already signed off on a changeset you can use ``--force`` to replace
-your previous signoff with a new one.
+If ``--mdown`` is used the signoff message text will be interpreted as Markdown.
 
 Examples::
 
     hg review --signoff -m 'I do not work on this part of the code.'
     hg review --signoff --yes -m 'Thanks, this change looks good.'
     hg review --signoff --no -m 'This would break backwards compatibility!'
-    hg review --signoff --yes --force -m 'Nevermind, this is fine.'
 
 """
 
--- a/review/messages.py	Tue Jun 15 20:30:23 2010 -0400
+++ b/review/messages.py	Thu Jul 01 19:32:49 2010 -0400
@@ -1,8 +1,8 @@
 """Messages used by the command-line UI of hg-review.
 
-These are kept in a separate module to avoid repeating them over and over
-in the extension_ui module, and to make checking for proper output in the
-unit tests much easier.
+These are kept in a separate module to avoid repeating them over and over in
+the cli module, and to make checking for proper output in the unit tests much
+easier.
 
 """
 NO_DATA_STORE = """\
@@ -52,13 +52,21 @@
 you must give a filename to comment on specific lines!
 """
 
-COMMENT_EDITOR_MESSAGE = """\
+COMMENT_EDITOR = """\
 
 
 HG: Enter your comment. Lines beginning with 'HG:' are removed.
 HG: Leave comment empty to abort comment.\
 """
 
+COMMENT_EDITOR_MDOWN = """\
+
+
+HG: Enter your comment. Lines beginning with 'HG:' are removed.
+HG: This comment will be formatted with Markdown.
+HG: Leave comment empty to abort comment.\
+"""
+
 SIGNOFF_REQUIRES_MESSAGE = """\
 empty signoff message
 """
@@ -68,16 +76,24 @@
 """
 
 SIGNOFF_EXISTS = """\
-you have already signed off on this changeset (use -f to replace)!
+you have already signed off on this changeset (use "hg review --edit" to modify)!
 """
 
-SIGNOFF_EDITOR_MESSAGE = """\
+SIGNOFF_EDITOR = """\
 
 
 HG: Enter your signoff message. Lines beginning with 'HG:' are removed.
 HG: Leave message empty to abort signoff.\
 """
 
+SIGNOFF_EDITOR_MDOWN = """\
+
+
+HG: Enter your signoff message. Lines beginning with 'HG:' are removed.
+HG: This message will be formatted with Markdown.
+HG: Leave message empty to abort signoff.\
+"""
+
 REVIEW_LOG_CSET = """\
 changeset: %d:%s
 """
@@ -120,8 +136,14 @@
 """
 
 COMMIT_COMMENT = """Add a comment on changeset %s"""
+DELETE_COMMENT = """Remove comment from changeset %s"""
+RENAME_COMMENT = """Rename comment on changeset %s"""
+
 COMMIT_SIGNOFF = """Sign off on changeset %s"""
-DELETE_SIGNOFF = """Remove sign off on changeset %s"""
+DELETE_SIGNOFF = """Remove signoff on changeset %s"""
+RENAME_SIGNOFF = """Rename signoff on changeset %s"""
+
+FETCH = """Automated merge of review data."""
 
 WEB_START = """\
 starting web server
@@ -142,3 +164,24 @@
 CHECK_UNSEEN = """\
 changeset has no comments or signoffs
 """
+
+AMBIGUOUS_ID = """\
+the identifier '%s' matches more than one item!
+"""
+
+UNKNOWN_ID = """\
+unknown item '%s'!
+"""
+
+REQUIRES_IDS = """\
+no items specified
+"""
+
+EDIT_REQUIRES_SINGLE_ID = """\
+cannot edit multiple items
+"""
+
+EDIT_REQUIRES_ONE_OR_LESS_FILES = """\
+cannot edit a comment to be on multiple files
+"""
+
--- a/review/static/colorbox/jquery.colorbox.js	Tue Jun 15 20:30:23 2010 -0400
+++ b/review/static/colorbox/jquery.colorbox.js	Thu Jul 01 19:32:49 2010 -0400
@@ -78,7 +78,7 @@
 		rel: FALSE,
 		opacity: 0.9,
 		preloading: TRUE,
-		current: "image {current} of {total}",
+		current: "{current} of {total}",
 		previous: "previous",
 		next: "next",
 		close: "close",
--- a/review/static/comments.js	Tue Jun 15 20:30:23 2010 -0400
+++ b/review/static/comments.js	Thu Jul 01 19:32:49 2010 -0400
@@ -10,6 +10,11 @@
                                       name="new-comment-body"></textarea>\
                         </div>\
                         \
+                        <div class="field cuddly">\
+                            <input type="checkbox" class="checkbox" name="comment-markdown" id="id_comment-line-form_' + currNum + '_markdown" checked="checked" />\
+                            <label for="id_comment-line-form_' + currNum + '_markdown">Use Markdown to format this comment.</label>\
+                        </div>\
+                        \
                         <a class="submit button"><span>Post Comment</span></a>\
                         <a class="cancel-line button"><span>Cancel</span></a>\
                         \
--- a/review/static/extra.css	Tue Jun 15 20:30:23 2010 -0400
+++ b/review/static/extra.css	Thu Jul 01 19:32:49 2010 -0400
@@ -4,14 +4,29 @@
         left bottom,
         left top,
         color-stop(0.00, rgba(0, 0, 0, 0.15)),
-        color-stop(1, rgba(255, 255, 255, 0.0))
+        color-stop(1, rgba(0, 0, 0, 0.0))
     );
     background: -moz-linear-gradient(
         center bottom,
         rgba(0, 0, 0, 0.15) 0%,
-        rgba(255, 255, 255, 0.0) 100%
+        rgba(0, 0, 0, 0.0) 100%
     );
 }
+a.button:active span {
+    background: -webkit-gradient(
+        linear,
+        left bottom,
+        left top,
+        color-stop(0.00, rgba(0, 0, 0, 0.1)),
+        color-stop(1, rgba(0, 0, 0, 0.0))
+    );
+    background: -moz-linear-gradient(
+        center bottom,
+        rgba(0, 0, 0, 0.1) 0%,
+        rgba(0, 0, 0, 0.0) 100%
+    );
+}
+
 
 #index .content table tr:nth-child(even) td.node {
     background: -webkit-gradient(
--- a/review/static/style.css	Tue Jun 15 20:30:23 2010 -0400
+++ b/review/static/style.css	Thu Jul 01 19:32:49 2010 -0400
@@ -1,3 +1,11 @@
+.markdown p:last-child, .markdown ol:last-child, .markdown ul:last-child {
+  margin-bottom: 0;
+}
+.plain {
+  font-family: Monaco, Consolas, "Courier New", monospace;
+  font-size: 12px;
+  white-space: pre;
+}
 .group:after {
   clear: both;
   content: ' ';
@@ -87,19 +95,15 @@
   border-left: 1px solid #a9a883;
   border-bottom: 1px solid #989772;
 }
-body .header .remotes form a:active {
-  margin-top: 1px;
-  margin-bottom: -1px;
-}
 body .header .remotes form a:focus {
-  box-shadow: 0px 0px 3px rgba(100, 100, 200, 0.9);
-  -moz-box-shadow: 0px 0px 3px rgba(100, 100, 200, 0.9);
-  -webkit-box-shadow: 0px 0px 3px rgba(100, 100, 200, 0.9);
+  box-shadow: 0px 0px 4px rgba(100, 100, 100, 0.9);
+  -moz-box-shadow: 0px 0px 4px rgba(100, 100, 100, 0.9);
+  -webkit-box-shadow: 0px 0px 4px rgba(100, 100, 100, 0.9);
 }
 body .header .remotes form a span {
   display: inline-block;
   padding: 0 6px;
-  text-shadow: 0px 1px 1px #fefdd8;
+  text-shadow: 0px 1px 1px #e5e4bf;
   -webkit-border-radius: 3px;
   -moz-border-radius: 3px;
   border-radius: 3px;
@@ -109,10 +113,10 @@
   border-right: 1px solid #c8c695;
   border-left: 1px solid #c8c695;
   border-bottom: 1px solid #aead83;
-  background-color: #fefdd8;
+  background-color: #edecc7;
 }
 body .header .remotes form a:hover span {
-  background-color: #fefdd8;
+  background-color: #edecc7;
 }
 body .content {
   border-top: 1px solid #f8f7e8;
@@ -184,6 +188,9 @@
   border-radius: 2px;
   border: 1px solid #444;
 }
+form .field.cuddly {
+  margin-top: -13px;
+}
 #index .content table {
   width: 100%;
 }
@@ -276,19 +283,15 @@
   border-left: 1px solid #a6a6a6;
   border-bottom: 1px solid #959595;
 }
-#changeset .content a.submit:active {
-  margin-top: 1px;
-  margin-bottom: -1px;
-}
 #changeset .content a.submit:focus {
-  box-shadow: 0px 0px 3px rgba(100, 100, 200, 0.9);
-  -moz-box-shadow: 0px 0px 3px rgba(100, 100, 200, 0.9);
-  -webkit-box-shadow: 0px 0px 3px rgba(100, 100, 200, 0.9);
+  box-shadow: 0px 0px 4px rgba(100, 100, 100, 0.9);
+  -moz-box-shadow: 0px 0px 4px rgba(100, 100, 100, 0.9);
+  -webkit-box-shadow: 0px 0px 4px rgba(100, 100, 100, 0.9);
 }
 #changeset .content a.submit span {
   display: inline-block;
   padding: 0 6px;
-  text-shadow: 0px 1px 1px #fbfbfb;
+  text-shadow: 0px 1px 1px #e2e2e2;
   -webkit-border-radius: 3px;
   -moz-border-radius: 3px;
   border-radius: 3px;
@@ -298,10 +301,10 @@
   border-right: 1px solid #bbbbbb;
   border-left: 1px solid #bbbbbb;
   border-bottom: 1px solid #a4a4a4;
-  background-color: #fbfbfb;
+  background-color: #eaeaea;
 }
 #changeset .content a.submit:hover span {
-  background-color: #fbfbfb;
+  background-color: #eaeaea;
 }
 #changeset .content a.cancel, #changeset .content a.cancel-line {
   cursor: pointer;
@@ -323,19 +326,15 @@
   border-left: 1px solid #a6a6a6;
   border-bottom: 1px solid #959595;
 }
-#changeset .content a.cancel:active, #changeset .content a.cancel-line:active {
-  margin-top: 1px;
-  margin-bottom: -1px;
-}
 #changeset .content a.cancel:focus, #changeset .content a.cancel-line:focus {
-  box-shadow: 0px 0px 3px rgba(100, 100, 200, 0.9);
-  -moz-box-shadow: 0px 0px 3px rgba(100, 100, 200, 0.9);
-  -webkit-box-shadow: 0px 0px 3px rgba(100, 100, 200, 0.9);
+  box-shadow: 0px 0px 4px rgba(100, 100, 100, 0.9);
+  -moz-box-shadow: 0px 0px 4px rgba(100, 100, 100, 0.9);
+  -webkit-box-shadow: 0px 0px 4px rgba(100, 100, 100, 0.9);
 }
 #changeset .content a.cancel span, #changeset .content a.cancel-line span {
   display: inline-block;
   padding: 0 6px;
-  text-shadow: 0px 1px 1px #fbfbfb;
+  text-shadow: 0px 1px 1px #e2e2e2;
   -webkit-border-radius: 3px;
   -moz-border-radius: 3px;
   border-radius: 3px;
@@ -345,10 +344,10 @@
   border-right: 1px solid #bbbbbb;
   border-left: 1px solid #bbbbbb;
   border-bottom: 1px solid #a4a4a4;
-  background-color: #fbfbfb;
+  background-color: #eaeaea;
 }
 #changeset .content a.cancel:hover span, #changeset .content a.cancel-line:hover span {
-  background-color: #fbfbfb;
+  background-color: #eaeaea;
 }
 #changeset .content .navigation .middle {
   display: inline-block;
@@ -428,19 +427,15 @@
   border-left: 1px solid #a6a6a6;
   border-bottom: 1px solid #959595;
 }
-#changeset .content .activate a:active {
-  margin-top: 1px;
-  margin-bottom: -1px;
-}
 #changeset .content .activate a:focus {
-  box-shadow: 0px 0px 3px rgba(100, 100, 200, 0.9);
-  -moz-box-shadow: 0px 0px 3px rgba(100, 100, 200, 0.9);
-  -webkit-box-shadow: 0px 0px 3px rgba(100, 100, 200, 0.9);
+  box-shadow: 0px 0px 4px rgba(100, 100, 100, 0.9);
+  -moz-box-shadow: 0px 0px 4px rgba(100, 100, 100, 0.9);
+  -webkit-box-shadow: 0px 0px 4px rgba(100, 100, 100, 0.9);
 }
 #changeset .content .activate a span {
   display: inline-block;
   padding: 0 6px;
-  text-shadow: 0px 1px 1px #fbfbfb;
+  text-shadow: 0px 1px 1px #e2e2e2;
   -webkit-border-radius: 3px;
   -moz-border-radius: 3px;
   border-radius: 3px;
@@ -450,10 +445,10 @@
   border-right: 1px solid #bbbbbb;
   border-left: 1px solid #bbbbbb;
   border-bottom: 1px solid #a4a4a4;
-  background-color: #fbfbfb;
+  background-color: #eaeaea;
 }
 #changeset .content .activate a:hover span {
-  background-color: #fbfbfb;
+  background-color: #eaeaea;
 }
 #changeset .content .togglebox form {
   float: left;
@@ -477,10 +472,11 @@
   margin-bottom: 14px;
 }
 #changeset .content .item-listing .comment, #changeset .content .item-listing .signoff {
-  padding: 8px 10px;
+  padding: 8px 12px 8px 10px;
   border-top: 1px solid #fff;
   border-bottom: 1px solid #ddd;
   position: relative;
+  min-height: 41px;
 }
 #changeset .content .item-listing .comment:first-child, #changeset .content .item-listing .signoff:first-child {
   border-top: none;
@@ -492,25 +488,30 @@
   float: right;
 }
 #changeset .content .item-listing .comment .message, #changeset .content .item-listing .signoff .message {
-  font-family: Monaco, Consolas, "Courier New", monospace;
-  font-size: 12px;
   width: 690px;
   padding-top: 3px;
-  white-space: pre;
-  overflow-x: auto;
 }
 #changeset .content .item-listing .comment .avatar img, #changeset .content .item-listing .signoff .avatar img {
   height: 30px;
   width: 30px;
   margin-top: 5px;
+  -webkit-border-radius: 3px;
+  -moz-border-radius: 3px;
+  border-radius: 3px;
 }
 #changeset .content .item-listing .comment .expand, #changeset .content .item-listing .signoff .expand {
   position: absolute;
-  top: -4px;
-  right: 1px;
-  font-size: 16px;
+  top: 17px;
+  right: -18px;
+  font-size: 14px;
   font-weight: bold;
 }
+#changeset .content .item-listing .comment .expand:hover, #changeset .content .item-listing .signoff .expand:hover {
+  text-decoration: none;
+}
+#changeset .content .item-listing .comment .colorboxed, #changeset .content .item-listing .signoff .colorboxed {
+  display: none;
+}
 #changeset .content .item-listing .signoff .signoff-opinion {
   float: right;
   font: bold 30px/1 "Helvetica Neue", HelveticaNeue, Arial, Helvetica, sans-serif;
@@ -643,19 +644,39 @@
 #changeset .content .diff table td.comment .avatar img {
   height: 30px;
   width: 30px;
-}
-#changeset .content .diff table td.comment .message {
-  white-space: pre;
-  font-family: Monaco, Consolas, "Courier New", monospace;
+  -webkit-border-radius: 3px;
+  -moz-border-radius: 3px;
+  border-radius: 3px;
 }
 #changeset .content .diff table td.comment .author {
   padding-bottom: 3px;
 }
-#colorbox .expand {
+#changeset .content .diff table td.comment .comment-content {
+  position: relative;
+}
+#changeset .content .diff table td.comment .expand {
+  position: absolute;
+  top: -13px;
+  right: -8px;
+  font-size: 14px;
+  font-weight: bold;
+}
+#changeset .content .diff table td.comment .expand:hover {
+  text-decoration: none;
+}
+#changeset .content .diff table td.comment .colorboxed {
   display: none;
 }
+#colorbox #cboxLoadedContent {
+  padding: 10px;
+}
+#colorbox #cboxContent {
+  position: relative;
+}
 #colorbox .avatar {
-  float: right;
+  position: absolute;
+  top: 0px;
+  right: 0px;
 }
 #colorbox .author {
   font-size: 20px;
@@ -669,3 +690,9 @@
   font-family: Monaco, Consolas, "Courier New", monospace;
   white-space: pre;
 }
+#colorbox .context .context-head {
+  color: #888888;
+  font-size: 20px;
+  margin-top: -16px;
+  margin-bottom: 18px;
+}
--- a/review/static/style.less	Tue Jun 15 20:30:23 2010 -0400
+++ b/review/static/style.less	Thu Jul 01 19:32:49 2010 -0400
@@ -52,28 +52,38 @@
     .box-shadow(0px, 1px, 3px, rgba(0,0,0,0.1));
     .border-radius(4px);
     .multi-border((@bgcolor - #333), (@bgcolor - #444), (@bgcolor - #555));
-    &:active {
-        margin-top: 1px;
-        margin-bottom: -1px;
-    }
     &:focus {
-        .box-shadow(0px, 0px, 3px, rgba(100,100,200,0.9));
+        .box-shadow(0px, 0px, 4px, rgba(100,100,100,0.9));
     }
 }
 .button-span(@bgcolor: #ddd, @fcolor: #000000, @fsize: 14px, @lheight: 1.75) {
     display: inline-block;
     padding: 0 @fsize/2;
-    text-shadow: 0px 1px 1px (@bgcolor + #111);
+    text-shadow: 0px 1px 1px (@bgcolor - #080808);
     .border-radius(3px);
 }
 .button-hover(@bgcolor: #ddd, @fcolor: #000000, @fsize: 14px, @lheight: 1.75) {
     .multi-border(desaturate(darken(@bgcolor, 10%), 10%), desaturate(darken(@bgcolor, 20%), 20%), desaturate(darken(@bgcolor, 30%), 30%));
-    background-color: @bgcolor + #111;
+    background-color: @bgcolor;
 }
 .button-hover-span(@bgcolor: #ddd, @fcolor: #000000, @fsize: 14px, @lheight: 1.75) {
-    background-color: @bgcolor + #111;
+    background-color: @bgcolor;
 }
 
+.markdown {
+    p, ol, ul { 
+        &:last-child {
+            margin-bottom: 0;
+        }
+    }
+}
+.plain {
+    font-family: @font-mono;
+    font-size: 12px;
+    white-space: pre;
+}
+
+
 .group:after {
     clear:both; content:' '; display:block; font-size:0; line-height:0; visibility:hidden; width:0; height:0;
 }
@@ -224,6 +234,9 @@
             .border-radius(2px);
             border: 1px solid #444;
         }
+        &.cuddly {
+            margin-top: -13px;
+        }
     }
 }
 
@@ -409,10 +422,11 @@
         margin-bottom: 14px;
 
         .comment, .signoff {
-            padding: 8px 10px;
+            padding: 8px 12px 8px 10px;
             border-top: 1px solid #fff;
             border-bottom: 1px solid #ddd;
             position: relative;
+            min-height: 41px;
 
             &:first-child {
                 border-top: none;
@@ -424,24 +438,28 @@
                 float: right;
             }
             .message {
-                font-family: @font-mono;
-                font-size: 12px;
                 width: 690px;
                 padding-top: 3px;
-                white-space: pre;
-                overflow-x: auto;
             }
             .avatar img {
                 height: 30px;
                 width: 30px;
                 margin-top: 5px;
+                .border-radius(3px);
             }
             .expand {
                 position: absolute;
-                top: -4px;
-                right: 1px;
-                font-size: 16px;
+                top: 17px;
+                right: -18px;
+                font-size: 14px;
                 font-weight: bold;
+
+                &:hover {
+                    text-decoration: none;
+                }
+            }
+            .colorboxed {
+                display: none;
             }
         }
         .signoff {
@@ -593,15 +611,29 @@
                         img {
                             height: 30px;
                             width: 30px;
+                            .border-radius(3px);
                         }
                     }
-                    .message {
-                        white-space: pre;
-                        font-family: @font-mono;
-                    }
                     .author {
                         padding-bottom: 3px;
                     }
+                    .comment-content {
+                        position: relative;
+                    }
+                    .expand {
+                        position: absolute;
+                        top: -13px;
+                        right: -8px;
+                        font-size: 14px;
+                        font-weight: bold;
+
+                        &:hover {
+                            text-decoration: none;
+                        }
+                    }
+                    .colorboxed {
+                        display: none;
+                    }
                 }
             }
         }
@@ -609,11 +641,17 @@
 }
 
 #colorbox {
-    .expand {
-        display: none;
+    #cboxLoadedContent {
+        padding: 10px;
+    }
+    #cboxContent {
+        position: relative;
     }
     .avatar {
-        float: right;
+        position: absolute;
+        top: 0px;
+        right: 0px;
+
     }
     .author {
         font-size: 20px;
@@ -627,4 +665,12 @@
         font-family: @font-mono;
         white-space: pre;
     }
+    .context {
+        .context-head {
+            color: @c-light;
+            font-size: 20px;
+            margin-top: -16px;
+            margin-bottom: 18px;
+        }
+    }
 }
--- a/review/templates/base.html	Tue Jun 15 20:30:23 2010 -0400
+++ b/review/templates/base.html	Thu Jul 01 19:32:49 2010 -0400
@@ -31,7 +31,7 @@
                             {% for name, path in datastore.repo.ui.configitems("paths") %}
                                 <form action="/push/" method="POST" id="remote-push-{{ name }}">
                                     <input type="hidden" name="path" value="{{ name }}" />
-                                    <a class="button" href="#"><span>push to {{ name }}</span></a>
+                                    <a class="button submit" href="#"><span>push to {{ name }}</span></a>
                                 </form>
                             {% endfor %}
                         </div>
@@ -40,7 +40,7 @@
                             {% for name, path in datastore.repo.ui.configitems("paths") %}
                                 <form action="/pull/" method="post">
                                     <input type="hidden" name="path" value="{{ name }}" />
-                                    <a class="button" href="#"><span>pull from {{ name }}</span></a>
+                                    <a class="button submit" href="#"><span>pull from {{ name }}</span></a>
                                 </form>
                             {% endfor %}
                         </div>
--- a/review/templates/changeset.html	Tue Jun 15 20:30:23 2010 -0400
+++ b/review/templates/changeset.html	Thu Jul 01 19:32:49 2010 -0400
@@ -53,17 +53,7 @@
     {% endwith %}
 
     {% if not read_only or allow_anon %}
-        <div class="add-review-comment togglebox group">
-            <span class="activate"><a class="button" href="#"><span>Add a comment on this changeset</span></a></span>
-            <form class="disabled" id="comment-review-form" method="POST" action="">
-                <div class="field">
-                    <label class="infield" for="id_comment-review-form_body">Comment</label>
-                    <textarea autocomplete="off" id="id_comment-review-form_body" cols="60" rows="6" name="new-comment-body"></textarea>
-                </div>
-                <a class="submit button" href="#"><span>Post Comment</span></a>
-                <a class="cancel button" href="#"><span>Cancel</span></a>
-            </form>
-        </div>
+        {% include "pieces/forms/review-comment.html" %}
     {% endif %}
 
     <h2>Signoffs</h2>
@@ -74,21 +64,7 @@
         {% if signoffs %}
             <div class="signoffs item-listing">
                 {% for signoff in signoffs %}
-                    <div class="signoff group {{ signoff.opinion or 'neutral' }}">
-                        <div class="avatar">
-                            <img src="{{ utils['item_gravatar'](signoff, 30) }}" />
-                        </div>
-
-                        <div class="signoff-opinion {{ signoff.opinion or "neutral" }}">{{ signoff.opinion or "meh" }}</div>
-
-                        <div>
-                            <div class="author">
-                                <a href="mailto:{{ utils['email'](signoff.author) }}">{{ utils['templatefilters'].person(signoff.author) }}</a>
-                                signed off as <span class="opinion">{{ signoff.opinion or "neutral" }}</span> on this changeset, saying:
-                            </div>
-                            <div class="message">{{ signoff.message }}</div>
-                        </div>
-                    </div>
+                    {% include "pieces/signoff.html" %}
                 {% endfor %}
             </div>
         {% else %}
@@ -97,33 +73,7 @@
     {% endwith %}
 
     {% if not read_only %}
-        <div class="add-signoff togglebox group">
-            <span class="activate">
-                {% if cu_signoff %}
-                    <a class="button" href="#"><span>Change your signoff</span></a>
-                {% else %}
-                    <a class="button" href="#"><span>Sign off on this changeset</span></a>
-                {% endif %}
-            </span>
-            <form id="signoff-form" class="disabled" method="POST" action="">
-                <p class="sign-off-as">Sign off as:</p>
-                <div class="field">
-                    <input id="id_signoff-form_yes" type="radio" name="signoff" value="yes" {% if cu_signoff and cu_signoff.opinion == 'yes' %}checked{% endif %}/><label class="radio" for="id_signoff-form_yes">Yes</label>
-                    <input id="id_signoff-form_no"type="radio" name="signoff" value="no" {% if cu_signoff and cu_signoff.opinion == 'no' %}checked{% endif %}/><label class="radio" for="id_signoff-form_no">No</label>
-                    <input id="id_signoff-form_neutral"type="radio" name="signoff" value="neutral" {% if cu_signoff and cu_signoff.opinion == '' %}checked{% endif %}/><label class="radio" for="id_signoff-form_neutral">Neutral</label>
-                </div>
-                <div class="field">
-                    <label class="infield" for="id_signoff-form_body">Signoff message</label>
-                    <textarea autocomplete="off" id="id_signoff-form_body" cols="60" rows="6" name="new-signoff-body">{% if cu_signoff %}{{ cu_signoff.message }}{% endif %}</textarea>
-                </div>
-                {% if cu_signoff %}
-                    <a class="submit button" href="#"><span>Change Signoff</span></a>
-                {% else %}
-                    <a class="submit button" href="#"><span>Add Signoff</span></a>
-                {% endif %}
-                <a class="cancel button" href="#"><span>Cancel</span></a>
-            </form>
-        </div>
+        {% include "pieces/forms/signoff.html" %}
     {% endif %}
 
     <h2>Files</h2>
@@ -150,21 +100,8 @@
                 {% endif %}
 
                 {% if not read_only or allow_anon %}
-                    <div class="add-file-comment togglebox group">
-                        <span class="activate"><a class="button" href=""><span>Add a comment on this file</span></a></span>
-
-                        <form id="id_comment-file-form_{{ loop.index }}" class="disabled" method="POST" action="">
-                            <div class="field">
-                                <label class="infield" for="id_comment-file-form_{{ loop.index }}_body">Comment</label>
-                                <textarea autocomplete="off" id="id_comment-file-form_{{ loop.index }}_body" cols="60" rows="6" name="new-comment-body"></textarea>
-                            </div>
-
-                            <a class="submit button" href="#"><span>Post Comment</span></a>
-                            <a class="cancel button" href="#"><span>Cancel</span></a>
-
-                            <input type="hidden" name="filename" value="{{ filename }}" />
-                        </form>
-                    </div>
+                    {% set index = loop.index %}
+                    {% include "pieces/forms/file-comment.html" %}
                 {% endif %}
 
                 {% include "pieces/diff.html" %}
--- a/review/templates/pieces/comment.html	Tue Jun 15 20:30:23 2010 -0400
+++ b/review/templates/pieces/comment.html	Thu Jul 01 19:32:49 2010 -0400
@@ -1,10 +1,15 @@
+{% if comment.style == 'markdown' %}
+    {% set rendered = utils['markdown'](comment.message) %}
+{% endif %}
+
 <div class="comment group" id="comment-{{ comment.identifier }}">
-    <a href="#comment-{{ comment.identifier }}" class="expand" id="comment-expand-{{ comment.identifier }}">+</a>
+    <a href="#comment-{{ comment.identifier }}" rel="comments" class="expand" id="comment-expand-{{ comment.identifier }}">&rarr;</a>
     <script type="text/javascript">
         $(function() {
-            $("#comment-expand-{{ comment.identifier }}").colorbox({inline: true, href: "#comment-{{ comment.identifier }}"});
+            $("#comment-expand-{{ comment.identifier }}").colorbox({inline: true, href: "#comment-{{ comment.identifier }}-colorboxed"});
         });
     </script>
+
     <div class="avatar">
         <img src="{{ utils['item_gravatar'](comment, 30) }}" />
     </div>
@@ -13,6 +18,31 @@
             <a href="mailto:{{ utils['email'](comment.author) }}">{{ utils['templatefilters'].person(comment.author) }}</a>
             said:
         </div>
-        <div class="message">{{ comment.message }}</div>
+
+        {% if comment.style == 'markdown' %}
+            <div class="message markdown">{{ rendered|safe }}</div>
+        {% else %}
+            <div class="message plain">{{ comment.message }}</div>
+        {% endif %}
+    </div>
+
+    <div id="comment-{{ comment.identifier }}-colorboxed" class="colorboxed">
+        <div class="avatar">
+            <img src="{{ utils['item_gravatar'](comment, 30) }}" />
+        </div>
+        <div>
+            <div class="author">
+                <a href="mailto:{{ utils['email'](comment.author) }}">{{ utils['templatefilters'].person(comment.author) }}</a>
+                said:
+            </div>
+            <div class="context">
+                {% if comment.filename %}
+                    <div class="context-head">on {{ comment.filename }}</div>
+                {% else %}
+                    <div class="context-head">on changeset {{ rev.rev() }}:{{ utils['node_short'](rev.node()) }}</div>
+                {% endif %}
+            </div>
+            <div class="message">{{ comment.message }}</div>
+        </div>
     </div>
 </div>
--- a/review/templates/pieces/diff.html	Tue Jun 15 20:30:23 2010 -0400
+++ b/review/templates/pieces/diff.html	Thu Jul 01 19:32:49 2010 -0400
@@ -47,24 +47,7 @@
                     {% set comments = line['comments'] %}
 
                     {% for comment in comments %}
-                        <tr class="comment">
-                            <td class="comment group" colspan="3">
-                                <span class="commentlines disabled">{{ ','.join(utils['map'](utils['str'], comment.lines)) }}</span>
-                                <div class="avatar">
-                                    <img src="{{ utils['item_gravatar'](comment, 30) }}" />
-                                </div>
-
-                                <div>
-                                    <div class="author">
-                                        <a href="mailto:{{ utils['email'](comment.author) }}">
-                                            {{ utils['templatefilters'].person(comment.author) }}
-                                        </a>
-                                        said:
-                                    </div>
-                                    <div class="message">{{ comment.message }}</div>
-                                </div>
-                            </td>
-                        </tr>
+                        {% include "pieces/linecomment.html" %}
                     {% endfor %}
                 {% endwith %}
             {% endif %}
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/review/templates/pieces/forms/file-comment.html	Thu Jul 01 19:32:49 2010 -0400
@@ -0,0 +1,20 @@
+<div class="add-file-comment togglebox group">
+    <span class="activate"><a class="button" href=""><span>Add a comment on this file</span></a></span>
+
+    <form id="id_comment-file-form_{{ index }}" class="disabled" method="POST" action="">
+        <div class="field">
+            <label class="infield" for="id_comment-file-form_{{ index }}_body">Comment</label>
+            <textarea autocomplete="off" id="id_comment-file-form_{{ index }}_body" cols="60" rows="6" name="new-comment-body"></textarea>
+        </div>
+
+        <div class="field cuddly">
+            <input type="checkbox" class="checkbox" name="comment-markdown" id="id_comment-file-form_{{ index }}_markdown" checked="checked" />
+            <label for="id_comment-file-form_{{ index }}_markdown">Use Markdown to format this comment.</label>
+        </div>
+
+        <a class="submit button" href="#"><span>Post Comment</span></a>
+        <a class="cancel button" href="#"><span>Cancel</span></a>
+
+        <input type="hidden" name="filename" value="{{ filename }}" />
+    </form>
+</div>
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/review/templates/pieces/forms/review-comment.html	Thu Jul 01 19:32:49 2010 -0400
@@ -0,0 +1,16 @@
+<div class="add-review-comment togglebox group">
+    <span class="activate"><a class="button" href="#"><span>Add a comment on this changeset</span></a></span>
+    <form class="disabled" id="comment-review-form" method="POST" action="">
+        <div class="field">
+            <label class="infield" for="id_comment-review-form_body">Comment</label>
+            <textarea autocomplete="off" id="id_comment-review-form_body" cols="60" rows="6" name="new-comment-body"></textarea>
+        </div>
+        <div class="field cuddly">
+            <input type="checkbox" class="checkbox" name="comment-markdown" id="id_comment-review-form_markdown" checked="checked" />
+            <label for="id_comment-review-form_markdown">Use Markdown to format this comment.</label>
+
+        </div>
+        <a class="submit button" href="#"><span>Post Comment</span></a>
+        <a class="cancel button" href="#"><span>Cancel</span></a>
+    </form>
+</div>
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/review/templates/pieces/forms/signoff.html	Thu Jul 01 19:32:49 2010 -0400
@@ -0,0 +1,36 @@
+<div class="add-signoff togglebox group">
+    <span class="activate">
+        {% if cu_signoff %}
+            <a class="button" href="#"><span>Change your signoff</span></a>
+        {% else %}
+            <a class="button" href="#"><span>Sign off on this changeset</span></a>
+        {% endif %}
+    </span>
+    <form id="signoff-form" class="disabled" method="POST" action="">
+        <p class="sign-off-as">Sign off as:</p>
+
+        <div class="field">
+            <input id="id_signoff-form_yes" type="radio" name="signoff" value="yes" {% if cu_signoff and cu_signoff.opinion == 'yes' %}checked{% endif %}/><label class="radio" for="id_signoff-form_yes">Yes</label>
+            <input id="id_signoff-form_no" type="radio" name="signoff" value="no" {% if cu_signoff and cu_signoff.opinion == 'no' %}checked{% endif %}/><label class="radio" for="id_signoff-form_no">No</label>
+            <input id="id_signoff-form_neutral" type="radio" name="signoff" value="neutral" {% if cu_signoff and cu_signoff.opinion == '' %}checked{% endif %}/><label class="radio" for="id_signoff-form_neutral">Neutral</label>
+        </div>
+
+        <div class="field">
+            <label class="infield" for="id_signoff-form_body">Signoff message</label>
+            <textarea autocomplete="off" id="id_signoff-form_body" cols="60" rows="6" name="new-signoff-body">{% if cu_signoff %}{{ cu_signoff.message }}{% endif %}</textarea>
+        </div>
+
+        <div class="field cuddly">
+            <input type="checkbox" class="checkbox" name="signoff-markdown" id="id_signoff-form_markdown" checked="checked" />
+            <label for="id_signoff-form_markdown">Use Markdown to format this message.</label>
+
+        </div>
+        {% if cu_signoff %}
+            <input type="hidden" value="{{ cu_signoff.identifier }}" name="current"/>
+            <a class="submit button" href="#"><span>Change Signoff</span></a>
+        {% else %}
+            <a class="submit button" href="#"><span>Add Signoff</span></a>
+        {% endif %}
+        <a class="cancel button" href="#"><span>Cancel</span></a>
+    </form>
+</div>
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/review/templates/pieces/linecomment.html	Thu Jul 01 19:32:49 2010 -0400
@@ -0,0 +1,50 @@
+{% if comment.style == 'markdown' %}
+    {% set rendered = utils['markdown'](comment.message) %}
+{% endif %}
+
+<tr class="comment">
+    <td class="comment group" colspan="3" id="comment-{{ comment.identifier }}">
+        <div class="comment-content">
+            <span class="commentlines disabled">{{ ','.join(utils['map'](utils['str'], comment.lines)) }}</span>
+            <a href="#comment-{{ comment.identifier }}" rel="comments" class="expand" id="comment-expand-{{ comment.identifier }}">&rarr;</a>
+            <script type="text/javascript">
+                $(function() {
+                    $("#comment-expand-{{ comment.identifier }}").colorbox({inline: true, href: "#comment-{{ comment.identifier }}-colorboxed"});
+                });
+            </script>
+            <div class="avatar">
+                <img src="{{ utils['item_gravatar'](comment, 30) }}" />
+            </div>
+
+            <div>
+                <div class="author">
+                    <a href="mailto:{{ utils['email'](comment.author) }}">
+                        {{ utils['templatefilters'].person(comment.author) }}
+                    </a>
+                    said:
+                </div>
+
+                {% if comment.style == 'markdown' %}
+                    <div class="message markdown">{{ rendered|safe }}</div>
+                {% else %}
+                    <div class="message plain">{{ comment.message }}</div>
+                {% endif %}
+            </div>
+        </div>
+        <div id="comment-{{ comment.identifier }}-colorboxed" class="colorboxed">
+            <div class="avatar">
+                <img src="{{ utils['item_gravatar'](comment, 30) }}" />
+            </div>
+            <div>
+                <div class="author">
+                    <a href="mailto:{{ utils['email'](comment.author) }}">{{ utils['templatefilters'].person(comment.author) }}</a>
+                    said:
+                </div>
+                <div class="context">
+                    <div class="context-head">in {{ comment.filename }}</div>
+                </div>
+                <div class="message">{{ comment.message }}</div>
+            </div>
+        </div>
+    </td>
+</tr>
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/review/templates/pieces/signoff.html	Thu Jul 01 19:32:49 2010 -0400
@@ -0,0 +1,24 @@
+{% if signoff.style == 'markdown' %}
+    {% set rendered = utils['markdown'](signoff.message) %}
+{% endif %}
+
+<div class="signoff group {{ signoff.opinion or 'neutral' }}">
+    <div class="avatar">
+        <img src="{{ utils['item_gravatar'](signoff, 30) }}" />
+    </div>
+
+    <div class="signoff-opinion {{ signoff.opinion or "neutral" }}">{{ signoff.opinion or "meh" }}</div>
+
+    <div>
+        <div class="author">
+            <a href="mailto:{{ utils['email'](signoff.author) }}">{{ utils['templatefilters'].person(signoff.author) }}</a>
+            signed off as <span class="opinion">{{ signoff.opinion or "neutral" }}</span> on this changeset, saying:
+        </div>
+
+        {% if signoff.style == 'markdown' %}
+            <div class="message markdown">{{ rendered|safe }}</div>
+        {% else %}
+            <div class="message plain">{{ signoff.message }}</div>
+        {% endif %}
+    </div>
+</div>
--- a/review/tests/sample_data.py	Tue Jun 15 20:30:23 2010 -0400
+++ b/review/tests/sample_data.py	Thu Jul 01 19:32:49 2010 -0400
@@ -3,11 +3,13 @@
       'file_two': 'this is another test file',
       'long_file': 'a\nb\nc\nd\ne\nf\ng\nh\ni\nj\nk\nl\nm\no\np\nq\nr\ns\nt',
       'always_changing': 'this\nfile\nalways\nchanges',
+      'always_changing2': 'this\nfile\nalways\nchanges',
     },
     { 'file_one': 'hello again\nworld\nfoo\nbar',
       'file_three': 'this is a new file\nfor testing\npurposes\nonly',
       'test_dir/test_file': 'this file is inside\nof a directory\n\nponies!',
       'long_file': 'a\nb\nc\nd\ne\nf\nX\nh\ni\nj\nk\nl\nY\no\np\nq\nr\ns\nt',
-      'always_changing': 'this\nfile\nALWAYS\nchanges',
+      'always_changing': 'THIS\nFILE\nALWAYS\nCHANGES',
+      'always_changing2': 'THIS\nFILE\nALWAYS\nCHANGES',
     },
-]
\ No newline at end of file
+]
--- a/review/tests/test_check.py	Tue Jun 15 20:30:23 2010 -0400
+++ b/review/tests/test_check.py	Thu Jul 01 19:32:49 2010 -0400
@@ -1,59 +1,32 @@
-from nose import *
-from util import *
-from mercurial import util as hgutil
-from .. import messages
+from nose import with_setup
+from util import setup_reviewed_sandbox, teardown_sandbox, review, should_fail_with
+from util import get_identifiers
 
-BAD_ERROR = 'The correct error message was not printed.'
-def _check_e(e, m):
-    error = str(e)
-    assert m in e
-
-def _should_fail_with(m, **kwargs):
-    try:
-        output = review(**kwargs)
-    except hgutil.Abort, e:
-        _check_e(e, m)
-    else:
-        assert False, BAD_ERROR
-
+from .. import messages
 
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
 def test_check_empty():
-    review(check=True)
-
-    output = review(check=True, verbose=True)
-    assert messages.CHECK_SUCCESS in output
+    def t(rev):
+        output = review(check=True, rev=rev)
+        assert not output
 
-    review(signoff=True, no=True, message='.')
-    output = review(check=True, verbose=True)
-    assert messages.CHECK_SUCCESS in output
+        output = review(check=True, verbose=True, rev=rev)
+        assert messages.CHECK_SUCCESS in output
 
-    review(signoff=True, yes=False, message='.', force=True)
-    output = review(check=True, verbose=True)
-    assert messages.CHECK_SUCCESS in output
-
-    review(comment=True, message='.')
-    output = review(check=True, verbose=True)
-    assert messages.CHECK_SUCCESS in output
+        review(signoff=True, no=True, message='.', rev=rev)
+        output = review(check=True, verbose=True, rev=rev)
+        assert messages.CHECK_SUCCESS in output
 
-@with_setup(setup_reviewed_sandbox, teardown_sandbox)
-def test_check_empty_non_tip():
-    review(rev='0', check=True)
-
-    output = review(rev='0', check=True, verbose=True)
-    assert messages.CHECK_SUCCESS in output
+        i = get_identifiers(rev)[0]
+        review(edit=i, yes=True, rev=rev)
+        output = review(check=True, verbose=True, rev=rev)
+        assert messages.CHECK_SUCCESS in output
 
-    review(signoff=False, message='.')
-    output = review(rev='0', check=True, verbose=True)
-    assert messages.CHECK_SUCCESS in output
-
-    review(signoff=True, message='.')
-    output = review(rev='0', check=True, verbose=True)
-    assert messages.CHECK_SUCCESS in output
-
-    review(comment=True, message='.')
-    output = review(rev='0', check=True, verbose=True)
-    assert messages.CHECK_SUCCESS in output
+        review(comment=True, message='.', rev=rev)
+        output = review(check=True, verbose=True, rev=rev)
+        assert messages.CHECK_SUCCESS in output
+    t('.')
+    t('0')
 
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
 def test_check_no_nos():
@@ -61,15 +34,16 @@
     assert messages.CHECK_SUCCESS in output
 
     review(signoff=True, no=True, message='.')
-    _should_fail_with(messages.CHECK_HAS_NOS, check=True, verbose=True, no_nos=True)
+    should_fail_with(messages.CHECK_HAS_NOS, check=True, verbose=True, no_nos=True)
 
-    review(signoff=True, yes=True, message='.', force=True)
+    i = get_identifiers()[0]
+    review(edit=i, yes=True)
     output = review(check=True, verbose=True, no_nos=True)
     assert messages.CHECK_SUCCESS in output
 
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
 def test_check_yeses():
-    _should_fail_with(messages.CHECK_TOO_FEW_YESES, check=True, verbose=True, yeses='1')
+    should_fail_with(messages.CHECK_TOO_FEW_YESES, check=True, verbose=True, yeses='1')
 
     output = review(check=True, verbose=True, yeses='0')
     assert messages.CHECK_SUCCESS in output
@@ -78,21 +52,22 @@
     output = review(check=True, verbose=True, yeses='1')
     assert messages.CHECK_SUCCESS in output
 
-    _should_fail_with(messages.CHECK_TOO_FEW_YESES, check=True, verbose=True, yeses='2')
+    should_fail_with(messages.CHECK_TOO_FEW_YESES, check=True, verbose=True, yeses='2')
 
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
 def test_check_seen():
-    _should_fail_with(messages.CHECK_UNSEEN, check=True, verbose=True, seen=True)
+    should_fail_with(messages.CHECK_UNSEEN, check=True, verbose=True, seen=True)
 
     review(signoff=True, yes=True, message='.')
     output = review(check=True, verbose=True, seen=True)
     assert messages.CHECK_SUCCESS in output
 
-    review(signoff=True, no=True, message='.', force=True)
+    i = get_identifiers()[0]
+    review(edit=i, no=True)
     output = review(check=True, verbose=True, seen=True)
     assert messages.CHECK_SUCCESS in output
 
-    _should_fail_with(messages.CHECK_UNSEEN, rev='0', check=True, verbose=True, seen=True)
+    should_fail_with(messages.CHECK_UNSEEN, rev='0', check=True, verbose=True, seen=True)
 
     review(rev='0', comment=True, message='.')
     output = review(check=True, verbose=True, seen=True)
@@ -101,24 +76,25 @@
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
 def test_check_priority_no_nos():
     review(signoff=True, no=True, message='.')
-    _should_fail_with(messages.CHECK_HAS_NOS, check=True, verbose=True, no_nos=True, yeses='0')
-    _should_fail_with(messages.CHECK_HAS_NOS, check=True, verbose=True, no_nos=True, seen=True)
-    _should_fail_with(messages.CHECK_HAS_NOS, check=True, verbose=True, no_nos=True, seen=True, yeses='0')
+    should_fail_with(messages.CHECK_HAS_NOS, check=True, verbose=True, no_nos=True, yeses='0')
+    should_fail_with(messages.CHECK_HAS_NOS, check=True, verbose=True, no_nos=True, seen=True)
+    should_fail_with(messages.CHECK_HAS_NOS, check=True, verbose=True, no_nos=True, seen=True, yeses='0')
 
-    review(signoff=True, yes=True, message='.', force=True)
+    i = get_identifiers()[0]
+    review(edit=i, yes=True)
     output = review(check=True, verbose=True, no_nos=True, seen=True, yeses='0')
     assert messages.CHECK_SUCCESS in output
 
     review(rev='0', signoff=True, no=True, message='.')
     review(rev='0', comment=True, message='.')
-    _should_fail_with(messages.CHECK_HAS_NOS, rev='0', check=True, verbose=True, no_nos=True)
-    _should_fail_with(messages.CHECK_HAS_NOS, rev='0', check=True, verbose=True, no_nos=True, seen=True)
-    _should_fail_with(messages.CHECK_HAS_NOS, rev='0', check=True, verbose=True, no_nos=True, seen=True, yeses='0')
+    should_fail_with(messages.CHECK_HAS_NOS, rev='0', check=True, verbose=True, no_nos=True)
+    should_fail_with(messages.CHECK_HAS_NOS, rev='0', check=True, verbose=True, no_nos=True, seen=True)
+    should_fail_with(messages.CHECK_HAS_NOS, rev='0', check=True, verbose=True, no_nos=True, seen=True, yeses='0')
 
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
 def test_check_priority_yeses():
     review(comment=True, message='.')
-    _should_fail_with(messages.CHECK_TOO_FEW_YESES, check=True, verbose=True, yeses='1', seen=True)
+    should_fail_with(messages.CHECK_TOO_FEW_YESES, check=True, verbose=True, yeses='1', seen=True)
 
     review(signoff=True, yes=True, message='.')
     output = review(check=True, verbose=True, yeses='1', seen=True)
--- a/review/tests/test_comment.py	Tue Jun 15 20:30:23 2010 -0400
+++ b/review/tests/test_comment.py	Thu Jul 01 19:32:49 2010 -0400
@@ -1,11 +1,16 @@
-from nose import *
-from util import *
+import os
+
+from nose import with_setup
+from util import setup_reviewed_sandbox, teardown_sandbox, review, should_fail_with
+from util import get_datastore_repo, get_sandbox_repo, get_ui
+from util import check_comment_exists_on_line
+
 from .. import api, messages
 
-import os
-from mercurial import util as hgutil
 from mercurial.node import hex
 
+# TODO: Figure out how to handle external editors nicely with nose.
+
 a1, a2 = (messages.REVIEW_LOG_COMMENT_AUTHOR % '|').split('|')
 
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
@@ -13,26 +18,6 @@
     output = review()
     assert messages.REVIEW_LOG_COMMENTS % (0, 0) in output
 
-
-# TODO: Figure out how to handle external editors nicely with nose.
-#@with_setup(setup_reviewed_sandbox, teardown_sandbox)
-#def test_blank_comment():
-    #try:
-        #review(comment=True, message=' \t\n')
-    #except hgutil.Abort, e:
-        #error = str(e)
-        #assert messages.COMMENT_REQUIRES_MESSAGE in error
-    #else:
-        #assert False, 'The correct error message was not printed.'
-
-    #try:
-        #review(comment=True, message=messages.COMMENT_EDITOR_MESSAGE)
-    #except hgutil.Abort, e:
-        #error = str(e)
-        #assert messages.COMMENT_REQUIRES_MESSAGE in error
-    #else:
-        #assert False, 'The correct error message was not printed.'
-
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
 def test_comment_formatting():
     review(comment=True, message=' \tTest comment one.\t ')
@@ -42,6 +27,7 @@
     assert messages.REVIEW_LOG_COMMENT_LINE % ' \tTest comment one.' not in output
     assert messages.REVIEW_LOG_COMMENT_LINE % 'Test comment one.\t ' not in output
     assert messages.REVIEW_LOG_COMMENT_LINE % ' \tTest comment one.\t ' not in output
+
     review(rev=0, comment=True,
            message=' \tTest\n  indented\n\ttabindented\noutdented  \ndone\t ')
     output = review(rev=0)
@@ -52,6 +38,12 @@
     assert messages.REVIEW_LOG_COMMENT_LINE % 'outdented  ' in output
     assert messages.REVIEW_LOG_COMMENT_LINE % 'done' in output
 
+@with_setup(setup_reviewed_sandbox, teardown_sandbox)
+def test_comment_styles():
+    review(comment=True, message='Test comment one.', mdown=True)
+    output = review()
+
+    assert messages.REVIEW_LOG_COMMENT_LINE % 'Test comment one.' in output
 
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
 def test_add_comments_to_parent_rev():
@@ -97,17 +89,17 @@
 
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
 def test_add_comments_to_file():
-    review(comment=True, message='Test comment one.', rev='1', files=['file_one'])
+    review(comment=True, message='Test comment one.', rev='1', args=['file_one'])
 
-    output = review(rev='1', files=['file_one'])
+    output = review(rev='1', args=['file_one'])
     assert a1 in output
     assert a2 in output
     assert messages.REVIEW_LOG_COMMENT_LINE % 'Test comment one.' in output
 
-    output = review(rev='1', files=['file_two'])
+    output = review(rev='1', args=['file_two'])
     assert messages.REVIEW_LOG_COMMENT_LINE % 'Test comment one.' not in output
 
-    output = review(rev='0', files=['file_one'])
+    output = review(rev='0', args=['file_one'])
     assert a1 not in output
     assert a2 not in output
     assert messages.REVIEW_LOG_COMMENT_LINE % 'Test comment one.' not in output
@@ -115,91 +107,66 @@
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
 def test_add_comments_to_multiple_files():
     review(comment=True, message='Test comment.', rev='1',
-        files=['file_one', 'always_changing'])
+        args=['file_one', 'always_changing'])
 
     output = review(rev='1')
     assert output.count(messages.REVIEW_LOG_COMMENT_LINE % 'Test comment.') == 2
 
-    try:
-        review(comment=True, rev='1', message='Test bad comment.', lines='1',
-            files=['file_one', 'always_changing'])
-    except hgutil.Abort, e:
-        error = str(e)
-        assert messages.COMMENT_LINES_REQUIRE_FILE in error
-    else:
-        assert False, 'The correct error message was not printed.'
+    should_fail_with(messages.COMMENT_LINES_REQUIRE_FILE,
+                     comment=True, rev='1', message='Test bad comment.', lines='1',
+                     args=['file_one', 'always_changing'])
 
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
 def test_add_comments_to_bad_file():
-    try:
-        review(comment=True, message='Test comment one.', files=['bad'])
-    except hgutil.Abort, e:
-        error = str(e)
-        assert messages.COMMENT_FILE_DOES_NOT_EXIST % ('bad', '2') in error
-    else:
-        assert False, 'The correct error message was not printed.'
+    should_fail_with(messages.COMMENT_FILE_DOES_NOT_EXIST % ('bad', '2'),
+                     comment=True, message='Test comment one.', args=['bad'])
 
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
 def test_add_comments_to_file_line():
-    try:
-        review(comment=True, rev='1', message='Test bad comment.', lines='1')
-    except hgutil.Abort, e:
-        error = str(e)
-        assert messages.COMMENT_LINES_REQUIRE_FILE in error
-    else:
-        assert False, 'The correct error message was not printed.'
+    should_fail_with(messages.COMMENT_LINES_REQUIRE_FILE,
+                     comment=True, rev='1', message='Test bad comment.', lines='1')
 
     review(comment=True, rev='1', message='Test comment one.',
-        files=['file_one'], lines='1')
+        args=['file_one'], lines='1')
 
-    output = review(rev='1', files=['file_one'])
+    output = review(rev='1', args=['file_one'])
 
     # Make sure the comment is present at all.
     assert a1 in output
     assert a2 in output
     assert messages.REVIEW_LOG_COMMENT_LINE % 'Test comment one.' in output
 
-    # Make sure it's in the correct place
-    output = output.splitlines()
-    for n, line in enumerate(output):
-        if line.startswith('#'):
-            assert output[n-1].strip().startswith('1')
-            break
+    check_comment_exists_on_line(1, files=['file_one'], rev='1')
 
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
 def test_add_comments_to_file_lines():
     review(comment=True, rev='1', message='Test comment one.',
-        files=['file_one'], lines='1,2')
+        args=['file_one'], lines='1,2')
 
-    output = review(rev='1', files=['file_one'])
+    output = review(rev='1', args=['file_one'])
 
     # Make sure the comment is present at all.
     assert a1 in output
     assert a2 in output
     assert messages.REVIEW_LOG_COMMENT_LINE % 'Test comment one.' in output
 
-    # Make sure it's in the correct place
-    output = output.splitlines()
-    for n, line in enumerate(output):
-        if line.startswith('#'):
-            assert output[n-1].strip().startswith('2')
-            break
+    check_comment_exists_on_line(2, files=['file_one'], rev='1')
 
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
 def test_add_comments_to_file_in_subdir():
     filename = os.path.join('test_dir', 'test_file')
 
-    review(comment=True, message='Test comment one.', rev='1', files=[filename])
+    review(comment=True, message='Test comment one.', rev='1', args=[filename])
 
-    output = review(rev='1', files=[filename])
+    output = review(rev='1', args=[filename])
     assert a1 in output
     assert a2 in output
     assert messages.REVIEW_LOG_COMMENT_LINE % 'Test comment one.' in output
 
-    output = review(rev='1', files=['file_two'])
+    output = review(rev='1', args=['file_two'])
     assert messages.REVIEW_LOG_COMMENT_LINE % 'Test comment one.' not in output
 
-    output = review(rev='0', files=[filename])
+    output = review(rev='0', args=[filename])
     assert a1 not in output
     assert a2 not in output
     assert messages.REVIEW_LOG_COMMENT_LINE % 'Test comment one.' not in output
@@ -207,17 +174,17 @@
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
 def test_add_comments_to_file_in_cwd():
     os.chdir('test_dir')
-    review(comment=True, message='Test comment one.', rev='1', files=['test_file'])
+    review(comment=True, message='Test comment one.', rev='1', args=['test_file'])
 
-    output = review(rev='1', files=['test_file'])
+    output = review(rev='1', args=['test_file'])
     assert a1 in output
     assert a2 in output
     assert messages.REVIEW_LOG_COMMENT_LINE % 'Test comment one.' in output
 
-    output = review(rev='1', files=['file_two'])
+    output = review(rev='1', args=['file_two'])
     assert messages.REVIEW_LOG_COMMENT_LINE % 'Test comment one.' not in output
 
-    output = review(rev='0', files=['test_file'])
+    output = review(rev='0', args=['test_file'])
     assert a1 not in output
     assert a2 not in output
     assert messages.REVIEW_LOG_COMMENT_LINE % 'Test comment one.' not in output
@@ -227,24 +194,24 @@
     filename = os.path.join('..', 'file_three')
 
     os.chdir('test_dir')
-    review(comment=True, message='Test comment one.', rev='1', files=[filename])
+    review(comment=True, message='Test comment one.', rev='1', args=[filename])
 
-    output = review(rev='1', files=[filename])
+    output = review(rev='1', args=[filename])
     assert a1 in output
     assert a2 in output
     assert messages.REVIEW_LOG_COMMENT_LINE % 'Test comment one.' in output
 
-    output = review(rev='1', files=['file_two'])
+    output = review(rev='1', args=['file_two'])
     assert messages.REVIEW_LOG_COMMENT_LINE % 'Test comment one.' not in output
 
-    output = review(rev='0', files=[filename])
+    output = review(rev='0', args=[filename])
     assert a1 not in output
     assert a2 not in output
     assert messages.REVIEW_LOG_COMMENT_LINE % 'Test comment one.' not in output
 
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
 def test_comment_identifiers():
-    review(comment=True, message='Test comment one.', rev='1', files=['file_one'])
+    review(comment=True, message='Test comment one.', rev='1', args=['file_one'])
 
     rd = api.ReviewDatastore(get_ui(), get_sandbox_repo())
     dsr = get_datastore_repo()
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/review/tests/test_delete.py	Thu Jul 01 19:32:49 2010 -0400
@@ -0,0 +1,107 @@
+from nose import with_setup
+from util import setup_reviewed_sandbox, teardown_sandbox, review, should_fail_with
+from util import get_identifiers
+
+from .. import messages
+
+@with_setup(setup_reviewed_sandbox, teardown_sandbox)
+def test_delete_invalid():
+    should_fail_with(messages.REQUIRES_IDS, delete=True)
+    should_fail_with(messages.UNKNOWN_ID % 'a', delete=True, args=['a'])
+
+    review(comment=True, message='test')
+
+    should_fail_with(messages.UNKNOWN_ID % 'z', delete=True, args=['z'])
+
+    # Use the pigeonhole principle to create ambiguous identifiers.
+    for i in range(17):
+        review(comment=True, message='test%d' % i)
+
+    ids = get_identifiers()
+    id_map = {}
+    for i in ids:
+        id_map[i[0]] = id_map.get(i[0], 0) + 1
+    i = str(filter(lambda k: id_map[k] > 1, id_map.keys())[0])
+
+    should_fail_with(messages.AMBIGUOUS_ID % i, delete=True, args=[i])
+
+@with_setup(setup_reviewed_sandbox, teardown_sandbox)
+def test_delete_comment():
+    def t(rev):
+        review(rev=rev, comment=True, message='test')
+        i = get_identifiers(rev)[0]
+
+        output = review(delete=True, args=[i])
+        assert not output
+        output = review(rev=rev, verbose=True)
+        assert '(%s)\n' % i not in output
+
+        review(rev=rev, comment=True, message='test2')
+        review(rev=rev, comment=True, message='test3')
+        i1, i2 = get_identifiers(rev)
+
+        output = review(delete=True, args=[i1])
+        assert not output
+        output = review(rev=rev, verbose=True)
+        assert '(%s)\n' % i1 not in output
+        assert '(%s)\n' % i2 in output
+
+        review(rev=rev, comment=True, message='test4')
+        i1, i2 = get_identifiers(rev)
+
+        output = review(delete=True, args=[i1, i2])
+        assert not output
+        output = review(rev=rev, verbose=True)
+        assert '(%s)\n' % i1 not in output
+        assert '(%s)\n' % i2 not in output
+    t('.')
+    t('0')
+
+@with_setup(setup_reviewed_sandbox, teardown_sandbox)
+def test_delete_signoff():
+    # TODO: test multiple signoff deletions
+    review(signoff=True, message='test')
+    i = get_identifiers()[0]
+
+    output = review(delete=True, args=[i])
+    assert not output
+    output = review(verbose=True)
+    assert '(%s)\n' % i not in output
+
+    review(comment=True, message='test2')
+    review(signoff=True, message='test3')
+    i1, i2 = get_identifiers()
+
+    output = review(delete=True, args=[i2])
+    assert not output
+    output = review(verbose=True)
+    assert '(%s)\n' % i1 in output
+    assert '(%s)\n' % i2 not in output
+
+@with_setup(setup_reviewed_sandbox, teardown_sandbox)
+def test_delete_both():
+    def t(rev):
+        review(rev=rev, signoff=True, message='test')
+        review(rev=rev, comment=True, message='test')
+        ids = get_identifiers(rev)
+
+        output = review(rev=rev, delete=True, args=ids)
+        assert not output
+        output = review(rev=rev, verbose=True)
+        assert '(%s)\n' % ids[0] not in output
+        assert '(%s)\n' % ids[1] not in output
+
+        review(rev=rev, signoff=True, message='test2')
+        review(rev=rev, comment=True, message='test3')
+        review(rev=rev, comment=True, message='test4')
+        ids = get_identifiers(rev)
+
+        output = review(rev=rev, delete=True, args=ids[:2])
+        assert not output
+        output = review(rev=rev, verbose=True)
+        assert '(%s)\n' % ids[0] not in output
+        assert '(%s)\n' % ids[1] not in output
+        assert '(%s)\n' % ids[2] in output
+    t('.')
+    t('0')
+
--- a/review/tests/test_diffs.py	Tue Jun 15 20:30:23 2010 -0400
+++ b/review/tests/test_diffs.py	Thu Jul 01 19:32:49 2010 -0400
@@ -1,5 +1,6 @@
-from nose import *
-from util import *
+from nose import with_setup
+from util import setup_reviewed_sandbox, teardown_sandbox, review
+
 from .. import messages
 
 a1, a2 = (messages.REVIEW_LOG_COMMENT_AUTHOR % '|').split('|')
@@ -7,7 +8,7 @@
 
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
 def test_review_diff_default_context():
-    output = review(rev='1', files=['long_file'], unified='5')
+    output = review(rev='1', args=['long_file'], unified='5')
 
     assert ' 0:' not in output
     assert messages.REVIEW_LOG_SKIPPED % 1 in output
@@ -25,7 +26,7 @@
 
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
 def test_review_diff_full_context():
-    output = review(rev='1', files=['long_file'], unified='10000')
+    output = review(rev='1', args=['long_file'], unified='10000')
 
     assert s1 not in output
     assert s2 not in output
@@ -35,7 +36,7 @@
 
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
 def test_review_diff_small_context():
-    output = review(rev='1', files=['long_file'], unified='2')
+    output = review(rev='1', args=['long_file'], unified='2')
 
     assert ' 3:' not in output
     assert messages.REVIEW_LOG_SKIPPED % 4 in output
@@ -55,9 +56,9 @@
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
 def test_review_diff_with_comment():
     review(comment=True, rev='1', message='Test comment one.',
-        files=['long_file'], lines='6,7')
+        args=['long_file'], lines='6,7')
 
-    output = review(rev='1', files=['long_file'], unified=0)
+    output = review(rev='1', args=['long_file'], unified=0)
 
     # Make sure the comment is present at all.
     assert a1 in output
@@ -74,9 +75,9 @@
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
 def test_review_diff_with_skipped_comment():
     review(comment=True, rev='1', message='Test comment one.',
-        files=['long_file'], lines='3')
+        args=['long_file'], lines='3')
 
-    output = review(rev='1', files=['long_file'], unified=0)
+    output = review(rev='1', args=['long_file'], unified=0)
 
     # Make sure the comment is present at all.
     assert a1 in output
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/review/tests/test_edit.py	Thu Jul 01 19:32:49 2010 -0400
@@ -0,0 +1,201 @@
+import time
+
+from nose import with_setup
+from util import setup_reviewed_sandbox, teardown_sandbox, review, should_fail_with
+from util import get_identifiers, check_comment_exists_on_line
+
+from .. import messages
+
+@with_setup(setup_reviewed_sandbox, teardown_sandbox)
+def test_edit_invalid():
+    should_fail_with(messages.UNKNOWN_ID % 'z', edit='z')
+
+    review(comment=True, message='test')
+
+    should_fail_with(messages.UNKNOWN_ID % 'z', edit='z')
+
+    # Use the pigeonhole principle to create ambiguous identifiers.
+    for i in range(17):
+        review(comment=True, message='test%d' % i)
+
+    ids = get_identifiers()
+    id_map = {}
+    for i in ids:
+        id_map[i[0]] = id_map.get(i[0], 0) + 1
+    i = str(filter(lambda k: id_map[k] > 1, id_map.keys())[0])
+
+    should_fail_with(messages.AMBIGUOUS_ID % i, edit=i)
+
+
+@with_setup(setup_reviewed_sandbox, teardown_sandbox)
+def test_touch_comment():
+    def t(rev):
+        review(rev=rev, comment=True, message='test', args=['always_changing'], lines='1')
+        i = get_identifiers(rev)[0]
+
+        # This sucks, but we need to do it to support testing "touch" edits.
+        time.sleep(1.1)
+
+        output = review(edit=i)
+        assert not output
+        output = review(rev=rev, verbose=True)
+        assert '(%s)' % i not in output
+        assert len(get_identifiers(rev)) == 1
+        assert messages.REVIEW_LOG_COMMENT_LINE % 'test' in output
+
+        check_comment_exists_on_line(1, files=['always_changing'], rev=rev)
+    t('1')
+    t('0')
+
+@with_setup(setup_reviewed_sandbox, teardown_sandbox)
+def test_edit_comment_message():
+    def t(rev):
+        review(rev=rev, comment=True, message='test')
+        i = get_identifiers(rev)[0]
+
+        output = review(edit=i, message='edited')
+        assert not output
+        output = review(rev=rev, verbose=True)
+        assert '(%s)' % i not in output
+        assert len(get_identifiers(rev)) == 1
+        assert messages.REVIEW_LOG_COMMENT_LINE % 'test' not in output
+        assert messages.REVIEW_LOG_COMMENT_LINE % 'edited' in output
+    t('.')
+    t('0')
+
+@with_setup(setup_reviewed_sandbox, teardown_sandbox)
+def test_edit_comment_lines():
+    def t(rev):
+        review(rev=rev, comment=True, message='test', args=['always_changing'], lines='1')
+        i = get_identifiers(rev)[0]
+
+        output = review(edit=i, lines='3')
+        assert not output
+        output = review(rev=rev, verbose=True)
+        assert '(%s)' % i not in output
+        assert len(get_identifiers(rev)) == 1
+        assert messages.REVIEW_LOG_COMMENT_LINE % 'test' in output
+        check_comment_exists_on_line(3, files=['always_changing'], rev=rev)
+
+        i = get_identifiers(rev)[0]
+
+        output = review(edit=i, lines='1,2')
+        assert not output
+        output = review(rev=rev, verbose=True)
+        assert '(%s)' % i not in output
+        assert len(get_identifiers(rev)) == 1
+        assert messages.REVIEW_LOG_COMMENT_LINE % 'test' in output
+        check_comment_exists_on_line(2, files=['always_changing'], rev=rev)
+    t('1')
+    t('0')
+
+@with_setup(setup_reviewed_sandbox, teardown_sandbox)
+def test_edit_comment_filename():
+    def t(rev):
+        review(rev=rev, comment=True, message='test', args=['always_changing'], lines='1')
+        i = get_identifiers(rev)[0]
+
+        output = review(edit=i, args=['always_changing2'])
+        assert not output
+
+        output = review(rev=rev, verbose=True, args=['always_changing'])
+        assert '(%s)' % i not in output
+        assert len(get_identifiers(rev, files=['always_changing'])) == 0
+        assert messages.REVIEW_LOG_COMMENT_LINE % 'test' not in output
+
+        output = review(rev=rev, verbose=True, args=['always_changing2'])
+        assert '(%s)' % i not in output
+        assert len(get_identifiers(rev, files=['always_changing2'])) == 1
+        assert messages.REVIEW_LOG_COMMENT_LINE % 'test' in output
+    t('1')
+    t('0')
+
+@with_setup(setup_reviewed_sandbox, teardown_sandbox)
+def test_edit_comment_everything():
+    def t(rev):
+        review(rev=rev, comment=True, message='test', args=['always_changing'], lines='1')
+        i = get_identifiers(rev)[0]
+
+        output = review(edit=i, args=['always_changing2'], message='edited', lines='2')
+        assert not output
+
+        output = review(rev=rev, verbose=True, args=['always_changing'])
+        assert '(%s)' % i not in output
+        assert len(get_identifiers(rev, files=['always_changing'])) == 0
+        assert messages.REVIEW_LOG_COMMENT_LINE % 'test' not in output
+
+        output = review(rev=rev, verbose=True, args=['always_changing2'])
+        assert '(%s)' % i not in output
+        assert len(get_identifiers(rev, files=['always_changing2'])) == 1
+        assert messages.REVIEW_LOG_COMMENT_LINE % 'test' not in output
+        assert messages.REVIEW_LOG_COMMENT_LINE % 'edited' in output
+        check_comment_exists_on_line(2, files=['always_changing2'], rev=rev)
+    t('1')
+    t('0')
+
+
+@with_setup(setup_reviewed_sandbox, teardown_sandbox)
+def test_touch_signoff():
+    def t(rev):
+        review(rev=rev, signoff=True, message='test', yes=True)
+        i = get_identifiers(rev)[0]
+
+        # This sucks, but we need to do it to support testing "touch" edits.
+        time.sleep(1.1)
+
+        output = review(edit=i)
+        assert not output
+        output = review(rev=rev, verbose=True)
+        assert '(%s)' % i not in output
+        assert len(get_identifiers(rev)) == 1
+        assert messages.REVIEW_LOG_SIGNOFF_LINE % 'test' in output
+        assert 'yes' in output
+    t('1')
+    t('0')
+
+@with_setup(setup_reviewed_sandbox, teardown_sandbox)
+def test_edit_signoff_message():
+    def t(rev):
+        review(rev=rev, signoff=True, message='test')
+        i = get_identifiers(rev)[0]
+
+        output = review(edit=i, message='edited')
+        assert not output
+        output = review(rev=rev, verbose=True)
+        assert '(%s)' % i not in output
+        assert len(get_identifiers(rev)) == 1
+        assert messages.REVIEW_LOG_SIGNOFF_LINE % 'test' not in output
+        assert messages.REVIEW_LOG_SIGNOFF_LINE % 'edited' in output
+    t('.')
+    t('0')
+
+@with_setup(setup_reviewed_sandbox, teardown_sandbox)
+def test_edit_signoff_opinion():
+    def t(rev):
+        review(rev=rev, signoff=True, message='test')
+
+        output = review(rev=rev, verbose=True)
+        assert 'as yes'     not in output
+        assert 'as neutral'     in output
+        assert 'as no'      not in output
+
+        i = get_identifiers(rev)[0]
+        output = review(edit=i, yes=True)
+        assert not output
+
+        output = review(rev=rev, verbose=True)
+        assert 'as yes'         in output
+        assert 'as neutral' not in output
+        assert 'as no'      not in output
+
+        i = get_identifiers(rev)[0]
+        output = review(edit=i, no=True)
+        assert not output
+
+        output = review(rev=rev, verbose=True)
+        assert 'as yes'     not in output
+        assert 'as neutral' not in output
+        assert 'as no'          in output
+    t('.')
+    t('0')
+
--- a/review/tests/test_init.py	Tue Jun 15 20:30:23 2010 -0400
+++ b/review/tests/test_init.py	Thu Jul 01 19:32:49 2010 -0400
@@ -1,12 +1,16 @@
 from __future__ import with_statement
-from nose import *
-from util import *
+
+import os
+
+from nose import with_setup
+
+from util import setup_reviewed_sandbox, teardown_sandbox, review, should_fail_with
+from util import setup_sandbox, get_datastore_repo, get_sandbox_repo
+from util import clone_sandbox_repo, sandbox_clone_path
+
 from .. import messages
 from .. import api
 
-import os
-from mercurial import util as hgutil
-
 @with_setup(setup_sandbox, teardown_sandbox)
 def test_init():
     sandbox = get_sandbox_repo()
@@ -25,25 +29,12 @@
 
 @with_setup(setup_sandbox, teardown_sandbox)
 def test_init_without_remote_path():
-    try:
-        review(init=True)
-    except hgutil.Abort, e:
-        error = str(e)
-        assert messages.INIT_REQUIRES_REMOTE_PATH in error
-    else:
-        assert False, 'The correct error message was not printed.'
+    should_fail_with(messages.INIT_REQUIRES_REMOTE_PATH, init=True)
 
 @with_setup(setup_sandbox, teardown_sandbox)
 def test_init_twice():
     review(init=True, remote_path='/sandbox-review')
-
-    try:
-        review(init=True, remote_path='/sandbox-review')
-    except hgutil.Abort, e:
-        error = str(e)
-        assert messages.INIT_EXISTS_UNCOMMITTED in error
-    else:
-        assert False, 'The correct error message was not printed.'
+    should_fail_with(messages.INIT_EXISTS_UNCOMMITTED, init=True, remote_path='/sandbox-review')
 
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
 def test_init_clone():
--- a/review/tests/test_signoff.py	Tue Jun 15 20:30:23 2010 -0400
+++ b/review/tests/test_signoff.py	Thu Jul 01 19:32:49 2010 -0400
@@ -1,8 +1,12 @@
-from nose import *
-from util import *
-from mercurial import util as hgutil
+from nose import with_setup
+from util import setup_reviewed_sandbox, teardown_sandbox, review, should_fail_with
+from util import get_datastore_repo, get_sandbox_repo, get_ui
+
+from .. import api, messages
+
 from mercurial.node import hex
-from .. import messages
+
+# TODO: Figure out how to handle external editors nicely with nose.
 
 s1, s2 = (messages.REVIEW_LOG_SIGNOFF_AUTHOR % ('|', 'neutral')).split('|')
 sy1, sy2 = (messages.REVIEW_LOG_SIGNOFF_AUTHOR % ('|', 'yes')).split('|')
@@ -13,25 +17,6 @@
     output = review()
     assert messages.REVIEW_LOG_SIGNOFFS % (0, 0, 0, 0) in output
 
-# TODO: Figure out how to handle external editors nicely with nose.
-#@with_setup(setup_reviewed_sandbox, teardown_sandbox)
-#def test_blank_signoff():
-    #try:
-        #review(signoff=True, message=' \t\n')
-    #except hgutil.Abort, e:
-        #error = str(e)
-        #assert messages.SIGNOFF_REQUIRES_MESSAGE in error
-    #else:
-        #assert False, 'The correct error message was not printed.'
-
-    #try:
-        #review(signoff=True, message=messages.SIGNOFF_EDITOR_MESSAGE)
-    #except hgutil.Abort, e:
-        #error = str(e)
-        #assert messages.SIGNOFF_REQUIRES_MESSAGE in error
-    #else:
-        #assert False, 'The correct error message was not printed.'
-
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
 def test_signoff_formatting():
     review(signoff=True, message=' \tTest signoff one.\t ')
@@ -41,6 +26,7 @@
     assert messages.REVIEW_LOG_SIGNOFF_LINE % ' \tTest signoff one.' not in output
     assert messages.REVIEW_LOG_SIGNOFF_LINE % 'Test signoff one.\t ' not in output
     assert messages.REVIEW_LOG_SIGNOFF_LINE % ' \tTest signoff one.\t ' not in output
+
     review(rev=0, signoff=True,
            message=' \tTest\n  indented\n\ttabindented\noutdented  \ndone\t ')
     output = review(rev=0)
@@ -51,6 +37,12 @@
     assert messages.REVIEW_LOG_SIGNOFF_LINE % 'outdented  ' in output
     assert messages.REVIEW_LOG_SIGNOFF_LINE % 'done' in output
 
+@with_setup(setup_reviewed_sandbox, teardown_sandbox)
+def test_signoff_styles():
+    review(signoff=True, message='Test signoff one.', mdown=True)
+    output = review()
+
+    assert messages.REVIEW_LOG_SIGNOFF_LINE % 'Test signoff one.' in output
 
 @with_setup(setup_reviewed_sandbox, teardown_sandbox)
 def test_signoff_on_parent_rev():
@@ -77,15 +69,7 @@
 def test_multiple_signoffs():
     review(signoff=True, message='Test signoff one.')
 
-    try:
-        review(signoff=True, message='Test signoff two.')
-    except hgutil.Abort, e:
-        error = str(e)
-        assert messages.SIGNOFF_EXISTS in error
-    else:
-        assert False, 'The correct error message was not printed.'
-
-    review(signoff=True, message='Test signoff two.', force=True)
+    should_fail_with(messages.SIGNOFF_EXISTS, signoff=True, message='Test signoff two.')
 
     output = review()
     assert messages.REVIEW_LOG_SIGNOFFS % (1, 0, 0, 1) in output
--- a/review/tests/util.py	Tue Jun 15 20:30:23 2010 -0400
+++ b/review/tests/util.py	Thu Jul 01 19:32:49 2010 -0400
@@ -5,30 +5,29 @@
 import os, shutil
 import sample_data
 from mercurial import cmdutil, commands, hg, ui
-from .. import api, extension_ui
+from mercurial import util as hgutil
+from .. import api, cli, messages
 
 _ui = ui.ui()
 _ui.setconfig('extensions', 'progress', '!')
 def review(init=False, comment=False, signoff=False, check=False, yes=False,
     no=False, force=False, message='', rev='.', remote_path='', lines='',
-    files=None, unified='5', web=False, verbose=False, debug=False, seen=False,
-    yeses='', no_nos=False):
+    args=None, unified='5', web=False, verbose=False, debug=False, mdown=False,
+    seen=False, yeses='', no_nos=False, delete=False, edit=''):
 
-    if not files:
-        files = []
+    args = args or []
 
     _ui.pushbuffer()
     if debug:
         _ui.debugflag = True
     elif verbose:
         _ui.verbose = True
-    extension_ui.review(
-        _ui, get_sandbox_repo(), *files,
+    cli.review(_ui, get_sandbox_repo(), *args,
         **dict(
             init=init, comment=comment, signoff=signoff, check=check, yes=yes,
             no=no, force=force, message=message, rev=rev, remote_path=remote_path,
-            lines=lines, unified=unified, web=web, seen=seen, yeses=yeses,
-            no_nos=no_nos
+            lines=lines, unified=unified, web=web, mdown=mdown, seen=seen,
+            yeses=yeses, no_nos=no_nos, delete=delete, edit=edit
         )
     )
     _ui.verbose, _ui.debugflag = False, False
@@ -104,7 +103,7 @@
     return hg.repository(_ui, sandbox_clone_path)
 
 def clone_sandbox_repo():
-    hg.clone(cmdutil.remoteui(_ui, {}), sandbox_repo_path, sandbox_clone_path)
+    hg.clone(hg.remoteui(_ui, {}), sandbox_repo_path, sandbox_clone_path)
 
 def get_datastore_repo(path=api.DEFAULT_DATASTORE_DIRNAME):
     return hg.repository(_ui, path)
@@ -112,3 +111,47 @@
 def get_ui():
     return _ui
 
+WRONG_ERROR = '''\
+The wrong error was printed.
+
+Expected: %s
+Actual:   %s'''
+BAD_ERROR = 'The correct error message was not printed.'
+def _check_e(e, m):
+    error = str(e)
+    assert m in error, WRONG_ERROR % (repr(m), repr(error))
+
+
+def should_fail_with(m, **kwargs):
+    try:
+        review(**kwargs)
+    except hgutil.Abort, e:
+        _check_e(e, m)
+    else:
+        assert False, BAD_ERROR
+
+
+a1, a2 = (messages.REVIEW_LOG_COMMENT_AUTHOR % '|').split('|')
+s1, s2, s3 = (messages.REVIEW_LOG_SIGNOFF_AUTHOR % ('|', '|')).split('|')
+
+def get_identifiers(rev='.', files=[]):
+    return [l.split(' ')[-1].strip('()\n')
+            for l in review(rev=rev, verbose=True, args=files).splitlines()
+            if (a1 in l and a2 in l) or (s1 in l and s2 in l and s3 in l)]
+
+
+COMMENT_LINE_ERROR = '''\
+Expected a comment on line %d:
+
+%s
+%s
+'''
+def check_comment_exists_on_line(n, files=[], rev='.'):
+    output = review(rev=rev, args=files).splitlines()
+    for i, line in enumerate(output):
+        if line.startswith('#'):
+            assert output[i-1].strip().startswith(str(n)), \
+                   COMMENT_LINE_ERROR % (n, output[i-1].rstrip(),
+                                         output[i].rstrip())
+            break
+
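The `should_fail_with` helper above wraps a call in try/except and checks that the expected message appears in the raised `hgutil.Abort`. A minimal Python 3 sketch of the same pattern, substituting a plain `RuntimeError` for Mercurial's `hgutil.Abort` (the `broken` function is a hypothetical stand-in for a failing `review()` call):

```python
# Sketch of the should_fail_with pattern: assert that a callable
# raises and that the expected text appears in the error message.

def should_fail_with(expected, func, **kwargs):
    """Assert func(**kwargs) raises and its message contains `expected`."""
    try:
        func(**kwargs)
    except RuntimeError as e:
        error = str(e)
        assert expected in error, (
            'The wrong error was printed.\n\n'
            'Expected: %r\nActual:   %r' % (expected, error))
    else:
        raise AssertionError('The correct error message was not printed.')

def broken(**kwargs):
    # Hypothetical failing command, standing in for review(**kwargs).
    raise RuntimeError('cannot comment on a file that does not exist')

should_fail_with('does not exist', broken)
```

Checking for a substring (rather than the whole message) keeps the tests stable when the error text gains surrounding detail.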
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/review/web.py	Thu Jul 01 19:32:49 2010 -0400
@@ -0,0 +1,204 @@
+from __future__ import with_statement
+
+"""The review extension's web UI."""
+
+import sys, os
+from hashlib import md5
+
+from mercurial import commands, hg, templatefilters
+from mercurial.node import short
+from mercurial.util import email
+
+import api, messages
+
+def unbundle():
+    package_path = os.path.split(os.path.realpath(__file__))[0]
+    template_path = os.path.join(package_path, 'web_templates')
+    media_path = os.path.join(package_path, 'web_media')
+    top_path = os.path.split(package_path)[0]
+    bundled_path = os.path.join(top_path, 'bundled')
+    flask_path = os.path.join(bundled_path, 'flask')
+    jinja2_path = os.path.join(bundled_path, 'jinja2')
+    werkzeug_path = os.path.join(bundled_path, 'werkzeug')
+    simplejson_path = os.path.join(bundled_path, 'simplejson')
+    markdown2_path = os.path.join(bundled_path, 'markdown2', 'lib')
+
+    sys.path.insert(0, flask_path)
+    sys.path.insert(0, werkzeug_path)
+    sys.path.insert(0, jinja2_path)
+    sys.path.insert(0, simplejson_path)
+    sys.path.insert(0, markdown2_path)
+
+unbundle()
+
+import markdown2
+from flask import Flask
+from flask import abort, g, redirect, render_template, request
+app = Flask(__name__)
+
+LOG_PAGE_LEN = 15
+
+def _item_gravatar(item, size=30):
+    return 'http://www.gravatar.com/avatar/%s?s=%d' % (md5(email(item.author)).hexdigest(), size)
+
+def _cset_gravatar(cset, size=30):
+    return 'http://www.gravatar.com/avatar/%s?s=%d' % (md5(email(cset.user())).hexdigest(), size)
+
+def _line_type(line):
+    return 'rem' if line[0] == '-' else 'add' if line[0] == '+' else 'con'
+
+def _categorize_signoffs(signoffs):
+    return { 'yes': len(filter(lambda s: s.opinion == 'yes', signoffs)),
+             'no': len(filter(lambda s: s.opinion == 'no', signoffs)),
+             'neutral': len(filter(lambda s: s.opinion == '', signoffs)),}
+
+markdowner = markdown2.Markdown(safe_mode='escape', extras=['code-friendly', 'pyshell', 'imgless'])
+utils = {
+    'node_short': short,
+    'md5': md5,
+    'email': email,
+    'templatefilters': templatefilters,
+    'len': len,
+    'item_gravatar': _item_gravatar,
+    'cset_gravatar': _cset_gravatar,
+    'line_type': _line_type,
+    'categorize_signoffs': _categorize_signoffs,
+    'map': map,
+    'str': str,
+    'decode': lambda s: s.decode('utf-8'),
+    'markdown': markdowner.convert,
+}
+
+def _render(template, **kwargs):
+    return render_template(template, read_only=app.read_only,
+        allow_anon=app.allow_anon, utils=utils, datastore=g.datastore,
+        title=app.title, **kwargs)
+
+
+@app.before_request
+def load_datastore():
+    g.datastore = api.ReviewDatastore(app.ui, hg.repository(app.ui, app.repo.root))
+
+@app.route('/')
+def index_newest():
+    return index(-1)
+
+@app.route('/<int:rev_max>/')
+def index(rev_max):
+    tip = g.datastore.target['tip'].rev()
+
+    if rev_max > tip or rev_max < 0:
+        rev_max = tip
+
+    rev_min = rev_max - LOG_PAGE_LEN if rev_max >= LOG_PAGE_LEN else 0
+    if rev_min < 0:
+        rev_min = 0
+
+    older = rev_min - 1 if rev_min > 0 else -1
+    newer = rev_max + LOG_PAGE_LEN + 1 if rev_max < tip else -1
+    if newer > tip:
+        newer = tip
+
+    rcsets = [g.datastore[r] for r in xrange(rev_max, rev_min - 1, -1)]
+    return _render('index.html', rcsets=rcsets, newer=newer, older=older)
+
+
+def _handle_signoff(revhash):
+    signoff = request.form['signoff']
+
+    if signoff not in ['yes', 'no', 'neutral']:
+        abort(400)
+
+    if signoff == 'neutral':
+        signoff = ''
+
+    body = request.form.get('new-signoff-body', '')
+    style = 'markdown' if request.form.get('signoff-markdown') else ''
+
+    current = request.form.get('current')
+    if current:
+        g.datastore.edit_signoff(current, body, signoff, style=style)
+    else:
+        rcset = g.datastore[revhash]
+        rcset.add_signoff(body, signoff, style=style)
+
+    return redirect("%s/changeset/%s/" % (app.site_root, revhash))
+
+def _handle_comment(revhash):
+    filename = request.form.get('filename', '')
+
+    lines = str(request.form.get('lines', ''))
+    if lines:
+        lines = filter(None, [l.strip() for l in lines.split(',')])
+
+    body = request.form['new-comment-body']
+    style = 'markdown' if request.form.get('comment-markdown') else ''
+    
+    if body:
+        rcset = g.datastore[revhash]
+        rcset.add_comment(body, filename, lines, style=style)
+    
+    return redirect("%s/changeset/%s/" % (app.site_root, revhash))
+
+@app.route('/changeset/<revhash>/', methods=['GET', 'POST'])
+def changeset(revhash):
+    if request.method == 'POST':
+        signoff = request.form.get('signoff', None)
+        if signoff and not app.read_only:
+            return _handle_signoff(revhash)
+        elif not app.read_only or app.allow_anon:
+            return _handle_comment(revhash)
+    
+    rcset = g.datastore[revhash]
+    rev = rcset.target[revhash]
+    
+    cu_signoffs = rcset.signoffs_for_current_user()
+    cu_signoff = cu_signoffs[0] if cu_signoffs else None
+    
+    tip = g.datastore.target['tip'].rev()
+    newer = rcset.target[rev.rev() + 1] if rev.rev() < tip else None
+    older = rcset.target[rev.rev() - 1] if rev.rev() > 0 else None
+    
+    return _render('changeset.html', rcset=rcset, rev=rev, cu_signoff=cu_signoff,
+        newer=newer, older=older)
+
+
+@app.route('/pull/', methods=['POST'])
+def pull():
+    if not app.read_only:
+        path = request.form['path']
+        from hgext import fetch
+        fetch.fetch(g.datastore.repo.ui, g.datastore.repo, path, rev=[],
+                    message=messages.FETCH, switch_parent=True, user='', date='')
+    return redirect('%s/' % app.site_root)
+
+@app.route('/push/', methods=['POST'])
+def push():
+    if not app.read_only:
+        path = request.form['path']
+        commands.push(g.datastore.repo.ui, g.datastore.repo, path)
+    return redirect('%s/' % app.site_root)
+
+
+def load_interface(ui, repo, read_only=False, allow_anon=False,
+        open=False, address='127.0.0.1', port=8080):
+    if open:
+        import webbrowser
+        webbrowser.open('http://localhost:%d/' % port)
+        
+    app.read_only = read_only
+    app.debug = ui.debugflag
+    app.allow_anon = allow_anon
+    app.site_root = ''
+
+    if app.allow_anon:
+        ui.setconfig('ui', 'username', 'Anonymous <anonymous@example.com>')
+
+    app.ui = ui
+    app.repo = repo
+    app.title = os.path.basename(repo.root)
+
+    if app.debug:
+        from flaskext.lesscss import lesscss
+        lesscss(app)
+    app.run(host=address, port=port)
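The `index` view in web.py above pages through review changesets `LOG_PAGE_LEN` at a time, clamping the window to `[0, tip]` and computing `older`/`newer` anchor revisions (`-1` meaning "no link"). The windowing arithmetic can be sketched on its own, with `tip` standing in for `g.datastore.target['tip'].rev()`:

```python
# Standalone sketch of the index view's paging arithmetic.
LOG_PAGE_LEN = 15

def page_window(rev_max, tip):
    """Return (revs, newer, older) for one page, newest first; -1 = no link."""
    if rev_max > tip or rev_max < 0:
        rev_max = tip
    rev_min = rev_max - LOG_PAGE_LEN if rev_max >= LOG_PAGE_LEN else 0
    older = rev_min - 1 if rev_min > 0 else -1
    newer = rev_max + LOG_PAGE_LEN + 1 if rev_max < tip else -1
    if newer > tip:
        newer = tip
    revs = list(range(rev_max, rev_min - 1, -1))
    return revs, newer, older
```

For example, with `tip = 40`, a request for `/` (`rev_max = -1`) clamps to 40 and shows revisions 40 down to 25, with `older = 24` and no `newer` link.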
--- a/review/web_ui.py	Tue Jun 15 20:30:23 2010 -0400
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,187 +0,0 @@
-from __future__ import with_statement
-
-"""The review extension's web UI."""
-
-import sys, os
-from hashlib import md5
-
-from mercurial import commands, hg, templatefilters
-from mercurial.node import short
-from mercurial.util import email
-
-import api
-
-def unbundle():
-    package_path = os.path.split(os.path.realpath(__file__))[0]
-    template_path = os.path.join(package_path, 'web_templates')
-    media_path = os.path.join(package_path, 'web_media')
-    top_path = os.path.split(package_path)[0]
-    bundled_path = os.path.join(top_path, 'bundled')
-    flask_path = os.path.join(bundled_path, 'flask')
-    jinja2_path = os.path.join(bundled_path, 'jinja2')
-    werkzeug_path = os.path.join(bundled_path, 'werkzeug')
-    simplejson_path = os.path.join(bundled_path, 'simplejson')
-
-    sys.path.insert(0, flask_path)
-    sys.path.insert(0, werkzeug_path)
-    sys.path.insert(0, jinja2_path)
-    sys.path.insert(0, simplejson_path)
-
-unbundle()
-
-from flask import Flask
-from flask import abort, g, redirect, render_template, request
-app = Flask(__name__)
-
-LOG_PAGE_LEN = 15
-
-def _item_gravatar(item, size=30):
-    return 'http://www.gravatar.com/avatar/%s?s=%d' % (md5(email(item.author)).hexdigest(), size)
-
-def _cset_gravatar(cset, size=30):
-    return 'http://www.gravatar.com/avatar/%s?s=%d' % (md5(email(cset.user())).hexdigest(), size)
-
-def _line_type(line):
-    return 'rem' if line[0] == '-' else 'add' if line[0] == '+' else 'con'
-
-def _categorize_signoffs(signoffs):
-    return { 'yes': len(filter(lambda s: s.opinion == 'yes', signoffs)),
-             'no': len(filter(lambda s: s.opinion == 'no', signoffs)),
-             'neutral': len(filter(lambda s: s.opinion == '', signoffs)),}
-utils = {
-    'node_short': short,
-    'md5': md5,
-    'email': email,
-    'templatefilters': templatefilters,
-    'len': len,
-    'item_gravatar': _item_gravatar,
-    'cset_gravatar': _cset_gravatar,
-    'line_type': _line_type,
-    'categorize_signoffs': _categorize_signoffs,
-    'map': map,
-    'str': str,
-    'decode': lambda s: s.decode('utf-8'),
-}
-
-def _render(template, **kwargs):
-    return render_template(template, read_only=app.read_only,
-        allow_anon=app.allow_anon, utils=utils, datastore=g.datastore,
-        title=app.title, **kwargs)
-
-
-@app.before_request
-def load_datastore():
-    g.datastore = api.ReviewDatastore(app.ui, hg.repository(app.ui, app.repo.root))
-
-@app.route('/')
-def index_newest():
-    return index(-1)
-
-@app.route('/<int:rev_max>/')
-def index(rev_max):
-    tip = g.datastore.target['tip'].rev()
-
-    if rev_max > tip or rev_max < 0:
-        rev_max = tip
-
-    rev_min = rev_max - LOG_PAGE_LEN if rev_max >= LOG_PAGE_LEN else 0
-    if rev_min < 0:
-        rev_min = 0
-
-    older = rev_min - 1 if rev_min > 0 else -1
-    newer = rev_max + LOG_PAGE_LEN + 1 if rev_max < tip else -1
-    if newer > tip:
-        newer = tip
-
-    rcsets = [g.datastore[r] for r in xrange(rev_max, rev_min - 1, -1)]
-    return _render('index.html', rcsets=rcsets, newer=newer, older=older)
-
-
-def _handle_signoff(revhash):
-    signoff = request.form.get('signoff', None)
-
-    if signoff not in ['yes', 'no', 'neutral']:
-        abort(400)
-
-    if signoff == 'neutral':
-        signoff = ''
-
-    body = request.form.get('new-signoff-body', '')
-    rcset = g.datastore[revhash]
-    rcset.add_signoff(body, signoff, force=True)
-
-    return redirect("%s/changeset/%s/" % (app.site_root, revhash))
-
-def _handle_comment(revhash):
-    filename = request.form.get('filename', '')
-    lines = str(request.form.get('lines', ''))
-    if lines:
-        lines = filter(None, [l.strip() for l in lines.split(',')])
-    body = request.form['new-comment-body']
-    
-    if body:
-        rcset = g.datastore[revhash]
-        rcset.add_comment(body, filename, lines)
-    
-    return redirect("%s/changeset/%s/" % (app.site_root, revhash))
-
-@app.route('/changeset/<revhash>/', methods=['GET', 'POST'])
-def changeset(revhash):
-    if request.method == 'POST':
-        signoff = request.form.get('signoff', None)
-        if signoff and not app.read_only:
-            return _handle_signoff(revhash)
-        elif not app.read_only or app.allow_anon:
-            return _handle_comment(revhash)
-    
-    rcset = g.datastore[revhash]
-    rev = rcset.target[revhash]
-    
-    cu_signoffs = rcset.signoffs_for_current_user()
-    cu_signoff = cu_signoffs[0] if cu_signoffs else None
-    
-    tip = g.datastore.target['tip'].rev()
-    newer = rcset.target[rev.rev() + 1] if rev.rev() < tip else None
-    older = rcset.target[rev.rev() - 1] if rev.rev() > 0 else None
-    
-    return _render('changeset.html', rcset=rcset, rev=rev, cu_signoff=cu_signoff,
-        newer=newer, older=older)
-
-
-@app.route('/pull/', methods=['POST'])
-def pull():
-    if not app.read_only:
-        path = request.form['path']
-        commands.pull(g.datastore.repo.ui, g.datastore.repo, path, update=True)
-    return redirect('%s/' % app.site_root)
-
-@app.route('/push/', methods=['POST'])
-def push():
-    if not app.read_only:
-        path = request.form['path']
-        commands.push(g.datastore.repo.ui, g.datastore.repo, path)
-    return redirect('%s/' % app.site_root)
-
-
-def load_interface(ui, repo, read_only=False, allow_anon=False,
-        open=False, address='127.0.0.1', port=8080):
-    if open:
-        import webbrowser
-        webbrowser.open('http://localhost:%d/' % port)
-        
-    app.read_only = read_only
-    app.debug = ui.debugflag
-    app.allow_anon = allow_anon
-    app.site_root = ''
-
-    if app.allow_anon:
-        ui.setconfig('ui', 'username', 'Anonymous <anonymous@example.com>')
-
-    app.ui = ui
-    app.repo = repo
-    app.title = os.path.basename(repo.root)
-
-    if app.debug:
-        from flaskext.lesscss import lesscss
-        lesscss(app)
-    app.run(host=address, port=port)
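The `_item_gravatar`/`_cset_gravatar` helpers in both versions of the web UI build an avatar URL from the md5 hex digest of the author's e-mail address. A self-contained sketch of that construction (note: Gravatar expects the address trimmed and lowercased before hashing, which this sketch does; the helpers above hash the address as extracted by `mercurial.util.email`):

```python
# Build a Gravatar avatar URL from an e-mail address, as the
# web UI's gravatar helpers do: md5 the address, append a size.
from hashlib import md5

def gravatar_url(address, size=30):
    digest = md5(address.strip().lower().encode('utf-8')).hexdigest()
    return 'http://www.gravatar.com/avatar/%s?s=%d' % (digest, size)
```

Normalizing the address before hashing means differently-cased spellings of the same address resolve to the same avatar.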