Some TeX Developments

Coding in the TeX world

Co-ordinating biblatex style development

with 4 comments

Bibliography formatting is complicated, and over the years this has led to the development of a lot of BibTeX styles and tools such as custom-bib. For biblatex users the output style can be altered from within LaTeX, and while there is not yet an equivalent of custom-bib for biblatex there is some excellent advice available for developing new styles.
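For example, picking the overall output style is just a package option (a minimal sketch: the database name and citation key are placeholders):

```latex
\documentclass{article}
% Select the citation style as a package option: no new .bst file needed
\usepackage[style=authoryear]{biblatex}
\addbibresource{refs.bib} % placeholder bibliography database
\begin{document}
\cite{knuth:tex} % placeholder entry key
\printbibliography
\end{document}
```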

One of the key ideas of biblatex is to make it possible to address citation styles in fields where BibTeX could never provide the correct tools. That shows up most obviously in the humanities, where the position of citations in the output needs to feed back into how they appear. Life gets even more complicated when you consider areas that need specialised fields included in their bibliographies: a classic example is law. There are already developments under way to help support these situations, for example the Biber-based flexible data model. Addressing the needs of style authors properly requires co-ordination between the core biblatex team and the style creators, and between different style authors. The question of how to achieve this communication came up recently on TeX-sx.

There are at least a couple of obvious ways for people to stay in touch with each other. First, where there is a need for a specific function to be added to biblatex, the package’s GitHub site is the place to go. Adding issues makes sure that they don’t get lost in direct e-mails, and also ensures that anyone with a point of view can comment. Secondly, for more open discussion, TUG has a bibliography mailing list. To date, this has not been very heavy in terms of traffic, but it would be an ideal forum to co-ordinate effort. It’s publicly archived and anyone can join, so there is no danger of losing important comments.

Of course, there are other ways to communicate too! All of the team are active on TeX-sx, and when we can we pick up biblatex issues and add them to the to do list. Direct e-mail works too, and I’m sure some style authors will go for direct communication. The key thing is to use the power of biblatex to make life easier for the end-user.

Written by Joseph Wright

May 16th, 2012 at 8:19 am

Posted in biblatex

LaTeX3 gets a new FPU

without comments

For a while now we’ve been promising to update the floating point functions in LaTeX3 to provide an expandable approach to calculations. We are now making good progress in swapping the old code for the new material. There are still a few things to be decided, but the general plan looks pretty clear.

The new code lets you parse floating point expressions in the same way you can for integers. So you can write

\fp_eval:n { 10 / 3  + 0.35 }

and get the output

3.683333333333333

without needing to worry about working with dimensions. Even more usefully, there is a range of built-in mathematical functions, so things like

\fp_eval:n { sin ( pi / 3 ) }

gives

0.8660254037844386

as you’d want.

You might pick up from the above output that the new FPU works to a higher accuracy than the old one: 16 significant figures. That’s been done without a serious performance hit: great credit goes to Bruno Le Floch for the work on this. Of course, you don’t have to have 16-figure output: the code includes rounding functions and various display options, so for example

\fp_eval:n { round ( sin ( pi / 3 ) , 3 ) }

gives

0.866

At the moment you’ll have to grab the latest development code to get this new version as part of expl3, but it will be in the next CTAN snapshot some time in June. You might also find that not every function you want is available: for example, there are currently no inverse trigonometric functions. Those are all on the to-do list, but they’re not needed for typesetting so may take a little while.

Written by Joseph Wright

May 15th, 2012 at 3:58 pm

Posted in LaTeX3

Tagged with floating point

The LPPL: ‘maintainer’ or ‘author-maintained’

with 4 comments

The LaTeX Project Public License (LPPL) was written to allow development of LaTeX code in a way that is free as in speech while also making sure that LaTeX users get what they expect when they

\usepackage{foo}

Frank Mittelbach wrote an excellent overview of how the LPPL was developed for TUGBoat last year, where he discussed the balance between the rights of the coder and the rights of the person using the code!

The LPPL is designed to deal with packages which are both ‘maintained’ and ‘unmaintained’, and provides a way to allow new maintainers to take over unmaintained material. It also allows for packages to be ‘author-maintained’, which sounds similar to just ‘maintained’ but is significantly different.

Packages which are either ‘maintained’ or ‘author-maintained’ have one or more people looking after them, and those people are responsible for making changes to the code. The difference comes if those people disappear (as has happened recently with biblatex). With a ‘maintained’ package, it’s possible for a new person or team to make a public statement that they are going to take over, then after a delay they become the new maintainers. On the other hand, a package which is ‘author-maintained’ cannot be taken over by someone else.

Now, the LPPL is a ‘free’ license and so it is always possible to create a fork from an existing package. In general, you don’t really want to do that simply to keep updating an existing package: taking over maintenance is much clearer all round. For biblatex, we can’t do that as it’s ‘author-maintained’. So we’re going to have to formally fork the project, mark the ‘new’ version as distinct from the old (Philipp Lehman) version and ensure that both remain on CTAN: not ideal.

So I’d urge people to mark their LPPL code as ‘maintained’  rather than  ‘author-maintained’.  You never know what might happen, and ‘maintained’ status works pretty well.

Written by Joseph Wright

May 4th, 2012 at 10:27 am

Posted in LaTeX

Tagged with LPPL

babel gets back on track

without comments

Posted today to the LaTeX-L list by Javier Bezos

Babel gets back on track and it is again actively maintained. The goals are mainly to fix bugs, to make it compatible with XeTeX and LuaTeX (as far as possible), and perhaps to add some minor new features (provided they are backward compatible).

No attempt will be done to take full advantage of the features provided by XeTeX and LuaTeX, which would require a completely new core (as for example polyglossia or as part of LaTeX3).

Your comments or suggestions (or questions!) are welcomed.

Hopefully we’ll see some long-standing babel bugs sorted out in the near future: many thanks to Javier for taking this up!

Written by Joseph Wright

May 1st, 2012 at 5:52 pm

Posted in LaTeX

Tagged with babel

Goodies from DANTE

with 2 comments

I recently discovered that you can read most issues of Die TeXnische Komödie online. My German is good enough to get the gist of the articles, and there is some interesting stuff there. I’ve always meant to join DANTE, if only to support their core CTAN server, and so this seemed like an ideal opportunity. So in the post this week I got the various goodies that they send out: I particularly like the sticker!

What’s particularly interesting for me is the section in Die TeXnische Komödie detailing the various TeX-related meet-ups that take place in Germany: there are a lot! Here in the UK, I’m pleased that UK-TUG manages a couple of training courses and a single speaker meeting/AGM every year. It’s quite a contrast!

Written by Joseph Wright

April 30th, 2012 at 8:13 pm

Posted in General

Programming LaTeX3: More on expansion

without comments

In the last post, I looked at the idea of expandability, and how we can use x-type expansion to exhaustively expand an argument. I also said that there was more to this, and hinted at two other argument specifications, f- and o-type expansion. We need these because TeX is a macro-expansion language, and while LaTeX3 coding does hide some of this detail it certainly does not get rid of all of it. Both of these forms of expansion are somewhat specialised, but both are also necessary!

Full (or forced) expansion

The f-type (‘full’ or ‘force’) expansion argument is in some ways similar to the x-type concept, as it’s about trying to expand as much as possible. So

\tl_set:Nn \l_tmpa_tl { foo }
\tl_set:Nn \l_tmpb_tl { \l_tmpa_tl }
\tl_set:Nx \l_tmpc_tl { \l_tmpb_tl }
\tl_show:N \l_tmpc_tl

and

\tl_set:Nn \l_tmpa_tl { foo }
\tl_set:Nn \l_tmpb_tl { \l_tmpa_tl }
\tl_set:Nf \l_tmpc_tl { \l_tmpb_tl }
\tl_show:N \l_tmpc_tl

give the same result: everything is expanded, and \l_tmpc_tl contains ‘foo’. There are two crucial differences, however. First, x-type variants are not expandable, even if their parent function is. On the other hand, f-type expansion is itself expandable. So something like

\cs_new:Npn \my_function:n #1 { \tl_length:n {#1} }
\int_eval:n { \exp_args:Nf \my_function:n { \l_tmpa_tl } + 1 }

will work as we’d want: \l_tmpa_tl will be expanded, then processed by \my_function:n and the result will be evaluated as an integer. Try that with \exp_args:Nx and it will fail.

The second difference is what happens when we hit a non-expandable token. With x-type expansion, TeX will look at the next thing in the input, and so tries to expand everything in the input (hence ‘exhaustive’). On the other hand, f-type expansion stops when the first non-expandable token is found. So

\tl_set:Nn \l_tmpa_tl { foo }
\tl_set:Nn \l_tmpb_tl { bar }
\tl_set:Nf \l_tmpc_tl { \l_tmpa_tl \l_tmpb_tl }
\tl_show:N \l_tmpc_tl

will show

foo\l_tmpb_tl

That happens because the f in foo is not expandable. So f-type expansion stops before it gets to \l_tmpb_tl, while x-type expansion would keep going.

The second point is important as it means that some functions will not give the expected result if used inside an f-type expansion. We show this in the code documentation for LaTeX3: functions which fully expand inside both f- and x-type expansions are shown with a hollow star, while those that only work inside an x-type expansion are shown with a filled star.

As you might pick up, f-type expansion is somewhat specialised. It’s useful when creating expandable commands, but for non-expandable ones x-type expansion is usually more appropriate.

Expanding just the once

As TeX is a macro expansion language, there are some tasks that are best carried out, or indeed only possible, using an exact number of expansions. To allow a single expansion, the argument specification o (‘once’) is available. To use this, you need to know what will happen after exactly one expansion. Functions which may be useful in this way have information about their expansion behaviour included in the documentation, while token list variables also expand to their content after exactly one expansion. Examples using o-type expansion tend to be low-level: perhaps the best example is dropping the first token from some input, so

\tl_set:No \l_tmpa_tl { \use_none:n tokens }
\tl_show:N \l_tmpa_tl

will show ‘okens’.
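Another common pattern pairs o-type expansion with \exp_args:No, so a function sees a variable’s contents rather than its name (a small sketch):

```latex
\tl_set:Nn \l_tmpa_tl { hello }
% One expansion turns \l_tmpa_tl into its contents before
% \tl_show:n sees the argument
\exp_args:No \tl_show:n { \l_tmpa_tl }
% shows 'hello' rather than the variable name
```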

As with f-type expansion, expanding just once is something of a specialist tool, but one that is needed. It also completes the types of argument specification we can use, so we’re now in a position to do some more serious LaTeX3 programming.

Written by Joseph Wright

April 29th, 2012 at 12:24 pm

Posted in LaTeX3

Tagged with Programming LaTeX3

Text blocks on both sides of the header

with one comment

A while ago, I wrote a short series on creating a CV in LaTeX. I’ve made a few adjustments recently, both to my CV and my letter of application, and one issue came up in both of them: putting text on both sides of the header.

For my CV, I wanted to split the address block I put at the top into two parts: address on one side, phone numbers on the other. The easiest way to do that turns out to be a tabular

\noindent
\begin{tabular}{@{}p{0.5\textwidth}@{}p{0.5\textwidth}@{}}
  \raggedright
  \textbf{Name} \\
  Address Line 1\\
  Address Line 2\\
  ...
  &
  \raggedleft
  \null               \par
  Tel.\ xxx xxxxxx    \\
  Mobile yyy yyy yyyy \\
  someone@some.domain
\end{tabular}

There are a few things to notice here. First, I’ve made the table columns take up all of the width of the page by using @{} to remove any inter-column space, then divided the available space up exactly. I’ve used \raggedleft to push the phone numbers to the right-hand margin, and have forced a blank line at the top of the phone number block so the name comes above everything.

For my letter, I wanted the two blocks again but needed both to be ragged right and with the right-hand one pushed to the right margin. That needs a couple of tables

  \begin{tabular}{@{}p{0.5\textwidth}@{}p{0.5\textwidth}@{}}
    \raggedright
    \toname
    \\
    \toaddress
    \hfil
  &
    \raggedleft
    \begin{tabular}[t]{l@{}}
      \ignorespaces
      \fromaddress
      \\[1 em]%
      \@date
    \end{tabular}
  \end{tabular}

(This is an adjustment of the standard letter class, hence the various storage macros.) What you’ll notice here is that I’ve used a nested tabular purely to get the alignment right: the [t] argument is vital to get both blocks to line up at the top of the page.

Both of these are quite easy once you know how, but it took a while to get them spot-on!

Written by Joseph Wright

April 26th, 2012 at 8:37 am

Posted in LaTeX

arara: Making LaTeX files your way

with one comment

Building a LaTeX source of any complexity means doing more than a single LaTeX run, for example requiring BibTeX or MakeIndex runs along with multiple LaTeX passes. There are several ways to automate this: you can build your own script or use auto-build tools such as latexmk or Rubber. These tools work by checking for changes in the various auxiliary files that LaTeX creates, so they can work out how many runs are needed. However, a lot of users prefer to retain control of building, and so do the various steps by hand.

There is now a tool that leaves the user in control but which helps to automate building: arara by Paulo Cereda. Arara is a Java-based system, which will automatically run the tools you ask it to based on comments in your source. It’s also up to you to set up the tools you want: you can already get quite a selection thanks to Marco Daniel.

How does this work then? In your LaTeX source, you have something like

% arara: pdflatex
% arara: bibtex
% arara: pdflatex
% arara: pdflatex

which as you might guess does the classic pdfLaTeX, BibTeX, pdfLaTeX, pdfLaTeX cycle (assuming you have created rules called pdflatex and bibtex). Life gets a bit more interesting when you start adding options to the different tools. For example, if you want to allow shell escape for just one file, you can do

% arara: pdflatex: { shell : yes }
% arara: bibtex
% arara: pdflatex: { shell : yes }
% arara: pdflatex: { shell : yes }

without needing to leave it on for everything. As you can edit the rules easily, it’s very easy to add specialist options for the way you work, even if no-one else would ever be interested in them. It also makes it easy to run both pdfLaTeX and traditional dvips routes without having to alter the settings in your editor: just add the appropriate arara rules to your files, and the correct route is chosen automatically.
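For example, a source aimed at the traditional route might begin (assuming rules named latex, dvips and ps2pdf have been set up):

```latex
% arara: latex
% arara: dvips
% arara: ps2pdf
\documentclass{article}
\begin{document}
A document built via the DVI route.
\end{document}
```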

Arara is very much in development at the moment, and that means there are a few rough edges. For example, you have to set up the right bits and pieces yourself: no installer just yet! However, it looks like a great way to have control over exactly what gets run without needing to script everything yourself.

Written by Joseph Wright

April 24th, 2012 at 9:17 pm

Posted in LaTeX

Tagged with arara

biblatex: A team to continue the work

with 5 comments

I posted a while ago about biblatex, looking for news of the author, Philipp Lehman. He’d been very active in replying to bug reports up to about 5 months ago, since when no-one has heard from him. As I said before, that’s a big concern well beyond LaTeX work, but it’s also left a question over continued development of biblatex.

Philipp Lehman had discussed a number of plans with the lead developer of Biber, Philip Kime. Unsurprisingly, Philip Kime has been keen to make sure that development of biblatex continues, but he wanted some help with the styles and any ‘hard-core’ TeX stuff. So a small team of ‘biblatex maintainers’ has been set up:

  • Philip Kime (lead)
  • Audrey Boruvka
  • Me

We’ve got two separate tasks. First, we want to deal with issues with the current release of biblatex (v1.7). These are tracked on the SourceForge site, and there are a few outstanding which are being looked at. Second, we want to continue the work that Philipp Lehman had planned. That work is taking place on GitHub, and there are some big issues to tackle.

Perhaps the biggest single item on the horizon for biblatex 2.0 is dropping BibTeX support, and going Biber-only. That’s something that Philipp Lehman had been planning for some time: the reality is that supporting BibTeX is increasingly awkward, and it’s making adding new features increasingly complex (and bug-prone). Of course, dropping BibTeX support will be a significant change, but it’s been on the horizon for some time, and will open the way to further extending the data model that biblatex (and Biber) use.

Of course, if Philipp Lehman does return (and we hope he does), then the ‘team’ will be very happy to hand back to him. For the moment, sticking with the roadmap seems the best way forward.

Written by Joseph Wright

April 23rd, 2012 at 10:04 pm

Posted in biblatex

Programming LaTeX3: Expandability

with one comment

In the last part, I looked at integer expressions, and how they can be used to calculate integer values. What I did not do was say exactly what can go inside an integer expression. That’s because it links in to a wider concept, and one that is very familiar to TeX programmers: expandability.

What is expandability?

To understand expandability, we need to think about what TeX does when we use functions. TeX is a macro expansion language, and as I’ve already said that means that LaTeX3 is too. When we use a function in a place where TeX can execute all of the built-in commands (‘primitives’), we don’t really need to worry about that too much. However, there are places where life is more complicated, as TeX will only execute some of the primitives. These places are ‘expansion contexts’. In these places, only some functions will work as expected, and so it’s important to know what will and will not work.

LaTeX3 and expandability

For traditional TeX programmers, understanding expandability means knowing the rules that TeX applies to decide what can and cannot be expanded. For LaTeX3, life is different as the documentation includes details of what will and will not work. If you read the documentation, you will see that some functions are marked with a star: those are expandable. For the moment, we won’t worry about the non-starred functions, other than to note that we can’t use them when we need expandability.

So sticking with our starting point, integer expressions, if we look at a function like \int_eval:n, this leads to the conclusion that the argument can only contain

  • Numbers, either given directly or stored inside variables
  • The symbols +, -, *, /, ( and )
  • Functions which are marked with a star in the LaTeX3 documentation, plus any arguments these themselves need.
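Putting those rules together (a small sketch: \tl_length:n is one of the starred, expandable functions, so it is allowed inside the expression):

```latex
% \tl_length:n is expandable, so it may appear inside
% an integer expression along with numbers and symbols
\int_eval:n { \tl_length:n { abc } + 2 * ( 4 - 1 ) }
% leaves 9 in the input stream
```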

That hopefully makes some sense: \int_eval:n is supposed to produce a number, and so what we put in should make sense for turning into a number. The same idea applies to all of the other integer expression functions we saw last time.

Expansion in our own functions

Integer expression functions make a good example for expandability, but if that was the only area that expansion applied it would not be that significant. However, there are lots of other places where we want to carry out expansion, and this takes us back to the idea of the argument specification for functions. There are three expansion related argument specifiers: f, o and x. Here, I’m going to deal just with x-type expansion, and will talk about f- and o-type expansion next time!

So what is x-type expansion? It’s exhaustive: expanding everything until only non-expandable content remains. We’ve already seen the idea that for example \tl_put_right:NV is related to \tl_put_right:Nn, so that we can access the value of a variable. So it should not be too much of a leap to see a relationship between \tl_set:Nn and \tl_set:Nx. So when we do

\tl_set:Nn \l_tmpa_tl { foo }
\tl_set:Nn \l_tmpb_tl { \l_tmpa_tl }
\tl_set:Nx \l_tmpc_tl { \l_tmpb_tl }
\tl_show:N \l_tmpc_tl

TeX exhaustively expands: it expands \l_tmpb_tl, and finds \l_tmpa_tl, then expands \l_tmpa_tl to foo, then stops as letters are not expandable. Inside an x-type expansion, TeX just keeps going! So we don’t need to know much about the content we are expanding.

With a function such as \int_eval:n, the argument specification is just n, so you might wonder why. The reason is that x-type functions are (almost) always defined as variants of an n-type parent. So functions that have to expand material (there is no choice) just have an n. (There are also a few things that \int_eval:n will expand that an x-type argument will not, but in general that’s not an issue as it only shows up if you make a mistake.)

Functions that can be expanded

I’ve said that functions that can be expanded fully are marked with a star in the documentation, but how do you make your own functions that can be expanded? It’s not too complicated: a function is expandable if it uses only expandable functions. So if all of the functions you use are marked with a star, then yours would be too.

There is a bit more to this, though. We have two (common) ways of creating new functions

  • \cs_new:Npn
  • \cs_new_protected:Npn

The difference is expandability. Protected functions are not expandable, whereas ones created with \cs_new:Npn should be. So the rule is to use \cs_new_protected:Npn unless you are sure that your function is expandable.
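As a sketch (the \my_... names here are placeholders):

```latex
% Expandable: the body uses only starred (expandable) functions
\cs_new:Npn \my_double:n #1 { \int_eval:n { 2 * (#1) } }

% Not expandable: \tl_set:Nn performs an assignment,
% so the function is created as protected
\tl_new:N \l_my_store_tl
\cs_new_protected:Npn \my_store:n #1
  { \tl_set:Nn \l_my_store_tl {#1} }
```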

LaTeX2e hackers note: What about \protected@edef?

Experienced TeX hackers will recognise that x-type expansion is built around TeX’s \edef primitive. If you’ve worked a lot with LaTeX2e, you’ll know that with any user input you should use \protected@edef, rather than \edef, so that LaTeX2e robust commands are handled safely.

If you are working with LaTeX2e input, you’ll still need to use \protected@edef to be safe when expanding arbitrary user input. All LaTeX3 functions are either fully-expandable or engine-protected, so don’t need the LaTeX2e mechanism, but of course that is not true for LaTeX2e commands.
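For reference, the LaTeX2e pattern looks like this (a sketch: \my@result is a placeholder name):

```latex
\makeatletter
% \edef would try to expand \textbf here; \protected@edef
% leaves robust commands alone while expanding the rest
% (\number\day becomes the current day of the month)
\protected@edef\my@result{\textbf{Day}~\number\day}
\makeatother
```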

Written by Joseph Wright

April 21st, 2012 at 8:02 pm

Posted in LaTeX3

Tagged with Programming LaTeX3
