l3build's Introduction

The expl3 (LaTeX3) Development Repository

Overview

The repository contains development material for expl3. This includes not only code to be developed into the expl3 kernel, but also a variety of test, documentation and more experimental material. All of this code works on top of LaTeX2e.

The following directories are present in the repository:

  • l3kernel: code forming the expl3 kernel and all stable code. With a modern LaTeX2e kernel, this code is loaded during format creation; when using an older LaTeX2e kernel, this material is accessible using the expl3 package.
  • l3backend: code for backend (driver) level interfaces across the expl3 codebase; none of this code has public interfaces, and so no distinction is made between stable and experimental code.
  • l3packages: code which is written to be used on top of LaTeX2e to explore interfaces; this bundle is now made up of historical material, and the concepts have been migrated to the LaTeX2e kernel.
  • l3experimental: code which is written to be used on top of LaTeX2e to experiment with code and interface concepts. The interfaces for these packages are still under active discussion. Parts of this code may eventually be migrated to l3kernel.
  • l3trial: material which is under very active development, for potential addition to l3kernel or l3experimental. Material in this directory may include potential replacements for existing modules, where large-scale changes are under way. This code is not released to CTAN.
  • l3leftovers: code which has been developed in the past by The LaTeX Project but is not suitable for use in its current form. Parts of this code may be used as the basis for new developments in l3kernel or l3experimental over time.

Support material for development is found in:

  • support, which contains files for the automated test suite which are 'local' to the repository.

Documentation is found in:

  • articles: discussion of concepts by team members for publication in TUGboat or elsewhere.

The repository also contains the directory xpackages. This contains code which is (broadly) being moved to l3experimental. Over time, xpackages is expected to be removed from the repository.

Issues

The issue tracker for expl3 is currently located on GitHub.

Build status

We use GitHub Actions as a hosted continuous integration service. For each commit, the build status is tested using the current release of TeX Live.

Current build status: [GitHub Actions build-status badge]

Development team

This code is developed by The LaTeX Project.

Copyright

This README file is copyright 2021-2024 The LaTeX Project.

l3build's People

Contributors

blefloch, davidcarlisle, dbitouze, dffischer, frankmittelbach, jlaurens, josephwright, koppor, muzimuzhi, phelypeoleinik, teatimeguest, wspr, wtsnjp, yegor256, zauguin, zunbeltz


l3build's Issues

Formalise which versions of TeX Live/engines/Lua are supported

Currently our normalisation, particularly for LuaTeX, has been built up piecemeal over time. We also have features such as epoch support that vary with year (see #7). Probably we should state a formal policy in the documentation and make sure the code is in line with it.

Extend call() to take a third argument for 'options'

This will form part of the solution to #18. The only question is whether the third argument should be 'some text' (replace or add to the existing options), or a table of the command line options to reconstruct. The latter makes it easier to add/remove options at will during calls, the former is faster up-front and avoids needing to know the 'reconstruction' form ...

Consider multiple files for the Lua code?

With l3build.lua at around 2500 lines, I wonder if it might be time to consider a more ‘literate’ approach to the code.

But I seem to recall that originally this was not desired since you'd then end up using l3build to unpack l3build. Do we still want to avoid having to munchhausen ourselves (as Frank would say)?

I guess an alternative might be a pretty-printing module for Lua code, so at least we could navigate the typeset code with some additional structure.

Or I might just be procrastinating from actual work, and the code is fine. Thoughts?

difference between running all tests and running one testfile

This may be related to special code in the latex2e build files, but then again it may not be, so I put it here ...

When running

$ texlua build.lua check tlb-rollback-003

in latex2e/base (versioning branch) then the result is

All checks passed

but as one can see Travis thinks otherwise. And indeed, the failure shows up when running all tests, i.e.

$ texlua build.lua check

No idea why, but probably the diff error codes are not properly propagated when single files are processed?
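If that suspicion is right, the fix amounts to aggregating the per-diff exit statuses into one overall error level. A stand-alone sketch of that aggregation (the function name is illustrative, not l3build's actual API):

```lua
-- Sketch: combine per-test diff statuses into one error level, so that
-- any non-zero status makes the overall check fail. Illustrative only.
function aggregate(statuses)
  local errorlevel = 0
  for _, status in ipairs(statuses) do
    if status > errorlevel then
      errorlevel = status
    end
  end
  return errorlevel
end
```

With aggregation like this, a single failing file would propagate its status up to the final check result rather than being silently dropped.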

testing MediaBox and other metadata

Setting up some tests for geometry https://github.com/davidcarlisle/geometry

It would be useful to check that the following does produce a pdf of size 100x200 eg pdfinfo reports

Page size: 100 x 200 pts

This was the bug being addressed (with luatex the page size was not affected, leaving this as A4) so it would be good to have a test for this case.

One possibility would be to provide a custom "normalization" function in build.lua that just runs pdfinfo (or a Lua function extracting the same information); other interfaces to this functionality could be imagined.

\documentclass{article}

\usepackage[papersize={100bp,200bp}]{geometry}

\begin{document}
x
\end{document}
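The pdfinfo-based approach could start from a small parser for the relevant line of pdfinfo's output. A sketch, with a hypothetical function name:

```lua
-- Sketch: extract the page size from pdfinfo output so that a custom
-- check in build.lua could compare it against an expected value.
-- The function name and interface are hypothetical.
function parse_page_size(pdfinfo_output)
  local w, h = pdfinfo_output:match("Page size:%s+([%d%.]+) x ([%d%.]+) pts")
  if w then
    return tonumber(w), tonumber(h)
  end
end
```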

fc can't open file on Windows

I use l3build to build a test for my project. Both the save and check procedures work well on Linux, but they fail on Windows.

The log is like the following:

Running checks on
  01-internal (1/3)
FC: cannot open ./BUILD/TEST/01-INTERNAL.XETEX.TLG - No such file or folder

FC: cannot open ./BUILD/TEST/01-INTERNAL.LUATEX.TLG - No such file or folder

          --> failed

Then I run (use backslash \)

fc .\BUILD\TEST\01-INTERNAL.XETEX.TLG .\testfiles\01-INTERNAL.XETEX.TLG

I get the correct result. So I guess the problem comes from the conversion between \ and /. I have noticed that there is a unix_to_win function in l3build-check.lua, but why doesn't it work?
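For illustration, a stand-alone sketch of what a unix_to_win-style conversion does (the real function lives in l3build-check.lua and may differ in detail):

```lua
-- Sketch: convert forward slashes to backslashes so Windows tools such
-- as fc accept the path. Stand-in for l3build's actual unix_to_win.
function unix_to_win(path)
  -- gsub returns a second value (the match count); parentheses drop it
  return (path:gsub("/", "\\"))
end
```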

Use a table-based approach for targets

As for options in 35d4231, it would be good if targets were generated from a table. This would allow auto-generation of help and probably avoid needing to have non-standard main() functions (as the table could be public).

Improve approach to multiple sets of tests

Currently we have --testfiledir|-t, but this requires that each set of non-standard tests be run manually. It would be better if several test 'sets' could be run automatically. The new call() function should allow this with some modifications: probably something like

for _, dir in ipairs(<set-of-testfile-dirs>) do
  call(".", "check", <some-override-setting-using-dir>)
end

where the current --testfiledir could be set for each call. The only issue then becomes the naming for the 'set of dirs' (or similar) function.

providing different number of checkruns in a test suite

I just wrote a bunch of tests that require 2 or even 3 checkruns before they give the right result. Thus in the config I have to set checkruns=3, with the result that if I put them into a larger suite of tests the processing time goes up drastically (as 99% of the tests would come out right after a single run).

Now on fast machines that is not so much of an issue, but on this rather old desktop, for example, the 2e suite already runs for 20 minutes, so I really don't need another 10 or 40 minutes.

Proposal:

checkruns =  <number> | *

If *, then do a loop in the check target (up to 3 times, maybe), comparing the log against the tlg and rerunning if there are differences; fail after 3 runs. This way the majority of tests would finish after 1 run, and only real failures would run 1 or 2 times more than necessary (but that's the exceptional case).

Of course that also requires a change in the save target, either by always running it 3 times (if * is set above) or by offering something like

texlua build.lua save --checkruns=2 tlb-foo 

The same could in principle also be offered for check, to locally override the config value for testing purposes.
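The proposed * behaviour could be sketched roughly as follows; run_test and logs_differ are stand-ins for l3build internals, not real API:

```lua
-- Sketch of the proposed adaptive loop for checkruns = "*": rerun a test
-- until the log matches the .tlg, giving up after a fixed cap.
function adaptive_check(run_test, logs_differ, max_runs)
  max_runs = max_runs or 3
  for run = 1, max_runs do
    run_test(run)
    if not logs_differ() then
      return true, run   -- passed after `run` runs
    end
  end
  return false, max_runs -- still differing after the cap: a real failure
end
```

Most tests would exit the loop after one run, so the extra runs are only paid for tests that genuinely need them (or that genuinely fail).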

running l3build in the testfiles folder

Currently when saving a test I have to type the full file name, as I'm outside the test folder and tab completion doesn't work. The alternative is to use tab completion to get testfiles\filename and then delete testfiles\ manually. Both are a bit cumbersome (I always forget the names I gave to my tests).

What is the recommended way here? Can/should one write a build.lua inside testfiles which points to the parent folder?

Provide a binary?

Logging this here for further discussion or just so I remember; no urgency.

I think there might be some benefit in distributing an executable script called l3build (chmod +x) that behaved essentially like texlua build.lua but with some extra possibilities; that would be easier to document and would let us add some features.

Benefits:

  • Slightly easier to document how to use l3build, and slightly easier for new users to get started
  • Possibility of l3build init which could generate a local build.lua file (a definite plus from my perspective)
  • Possibility of l3build newproject which could generate a set of template files in a “standard form” that fits our view of best practice (not sure we could ever agree on this but might be a nice idea...)

None of these are show-stoppers, really, but I do think it would be nice for new users to be able to head to their command line, type l3build, and have something sensible happen, even if it's just a sensible error message like what currently happens when you type texlua build.lua — the main thing being that you wouldn't already need a build.lua file in the current directory!
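A minimal sketch of the pre-flight check such a launcher could do before dispatching (purely illustrative, not the shipped script):

```lua
-- Sketch: a launcher would first check that a build.lua exists in the
-- target directory and give a sensible error otherwise.
function launch(dir)
  local f = io.open(dir .. "/build.lua", "r")
  if not f then
    return nil, "l3build: no build.lua found in " .. dir
  end
  f:close()
  -- a real launcher would now dispatch, e.g. dofile(dir .. "/build.lua"),
  -- or run a generated template for `l3build init`
  return true
end
```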

Add verbosity for debugging/development

When debugging it can be extremely useful to print status lines for what's happening. I started some work in this area long ago but obviously got side-tracked.

My only concern is that it adds a degree of cruft to the code; I've added a bunch of conditional "debug" style lines to fontspec, for example, and the code is now a bit ugly to read through. So we'd want to be careful about not overdoing it.

questionable data in travis log

Looking into the travis log after uploading some code to latex2e I saw the following section:

Running script build.lua with target "check" for module doc
usage: build.lua <command> [<options>] [<names>]
The most commonly used l3build commands are:
   check      Run all automated tests
   clean      Clean out directory tree
   ctan       Create CTAN-ready archive
   doc        Typesets all documentation files
   install    Installs files into the local texmf tree
   save       Saves test validation log
   setversion Update version information in sources
   uninstall  Uninstalls files from the local texmf tree
   unpack     Unpacks the source files into the build tree
Valid options are:
   --config|-c        Sets the config(s) used for running tests
   --date             Sets the date to insert into sources
   --dry-run          Dry run for install
   --engine|-e        Sets the engine(s) to use for running test
   --epoch            Sets the epoch for tests and typesetting
   --first            Name of first test to run
   --force|-f         Force tests to run if engine is not set up
   --halt-on-error|-H Stops running tests after the first failure
   --last             Name of last test to run
   --pdf|-p           Check/save PDF files
   --quiet|-q         Suppresses TeX output when unpacking
   --rerun            Skip setup: simply rerun tests
   --shuffle          Shuffle order of tests
   --texmfhome        Location of user texmf tree
   --version|-v       Sets the version to insert into sources
See l3build.pdf for further details.
Running script build.lua with target "check" for module required/cyrillic

It might be nicer if the build.lua in the latex2e/doc directory had a "check" target, or if the target were not called for that module, or if we found a way to avoid these handwritten build files in the first place.

Strange behavior with check and a single file

Checking a single .lvt file seems to produce a strange error (at least in latex2e/base, I haven't checked elsewhere). Using

$ texlua build.lua check tlb-rollback-001

gives

Running checks on
  tlb-rollback-001 (1/1)
Error: failed to find .pdf, .tlg or .lve file for tlb-rollback-001!

even though the .tlg exists and is found if all tests are run

The error doesn't show when I use

$ texlua build.lua check --rerun tlb-rollback-001

However, in that case the format etc. is still regenerated (which I thought was supposed to be omitted if --rerun is used).

Also strange: when using --rerun the luatex tlg shows a new difference:

(size10.clo
luaotfload | db : Font names database loaded from C:/texlive/2017/texmf-var/luatex-cache/generic/names/luaotfload-names.luc(load luc: C:/texlive/2017/texmf-var/luatex-cache/generic/fonts/otl/lmroman10-regular.luc))

The line with luaotfload was not in the file generated earlier with save -e luatex, and it would be rather unfortunate to have it.

error message handling with luatex (can this be normalized?)

When the test code deliberately produces an error we seem to get different log lines depending on which engine is used. Not sure if this can be normalized (I guess not) but I wanted to log it here, just in case.

Example:

***** ..\BUILD\TEST\tlb-rollback-001.luatex.tlg
   99:  ...                                              
  100:  l. ......Release[v1]{2014-01-01}{testpkg-2014-01-01}
  101:  The package 'testpkg' claims that it came into existence on 2014-01-01 which
***** ..\BUILD\TEST\TLB-ROLLBACK-001.LUATEX.LOG
   99:  ...                                              
  100:  l. ......ldRelease[v1]{2014-01-01}{testpkg-2014-01-01}
  101:  The package 'testpkg' claims that it came into existence on 2014-01-01 which
*****

luatex seems to display 2 more characters from the line on the left.

Rework setversion

Based on @wspr's excellent Lua code in #34, I realise that the current setversion customisation is not ideal. It should deal with a file handle, not a line of text (reflects a TeX background!). At the same time, I remain unhappy about using -v for 'version' here ... I think -t for --tag might be better ...

temporarily disable output normalisation during a check

Would it be possible to have an option to either temporarily disable output normalisation of the log file (I know that the check then definitely fails) or to keep a backup of the original log file? Sometimes when checks fail (or when I create checks) I would like to see absolute paths of files and fonts. Compiling in the testfiles folder is not the same, as support files can be different.

Provide a generalised mechanism for passing command line options 'forward'

Currently, the core l3build functions pass on -e, etc. when calling scripts in subdirectories. However, that only works if one uses the standard main() function and so on. To allow customised use of the code, some 'forwarding' function would be good. This would help with our own code as we need a custom top-level script and one in the 2e repo.

ctan zip files may have a dangerous name

Just a minute ago I made a big blunder, and (besides my stupidity) I partly blame l3build.
At the moment texlua build.lua ctan generates a zip file named after the bundle, e.g. latexbug.zip, but if that zip is opened on OSX the OS automatically generates a directory from it, so you end up with latexbug inside latexbug. This makes it easy to delete the wrong dir or get confused in other ways.

Maybe it's better to call these zip files <module>-ctan.zip or something like that.

document: \texttt or verbatim

The -- in \texttt{--option} will be typeset with a ligature and becomes a single en dash rather than two hyphens. So using verbatim (or another macro, e.g. from listings) may be a better choice.

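A minimal illustration of the difference:

```latex
% \texttt applies the usual TeX ligatures, so -- collapses to an en dash:
\texttt{--option}   % typeset roughly as "–option"
% \verb passes the characters through literally:
\verb|--option|     % typeset as "--option"
```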

What order do tests run in?

I've noticed that tests seem to run in pretty random (but sometimes repeatable) orders. E.g., my last check -H started out:

Running checks on
  input-fullwidth
  active-sscripts-amsmath
  mathsizes
  operatorname
  lmdefault-mathrm-it-bf

But I've just re-run it and received the same (unordered) order.

I quite like the idea of randomising the tests, since it means if you're re-running the test suite a lot to fix up a number of changes, you're more likely to catch failures earlier, since you don't need to sit through the last N tests that have already passed.

OTOH I would also like to be able to force a certain test (or group of tests) to go first, since if the package doesn't even load without error then there's not much point testing more subtle parts of the code.

Is this anything that has been ‘designed’? If we decide to make changes, any thoughts on whether a fixed or random order would be better?

Use system zip capabilities on Windows

Recent Windows versions allow something like

 powershell Compress-Archive -Path <path> -DestinationPath <zip-file>

Probably we should use this rather than relying on a third-party zip.
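A sketch of how the command choice might look (the function and the exact command strings are illustrative only):

```lua
-- Sketch: pick a zip invocation by platform, preferring PowerShell's
-- Compress-Archive on Windows over a third-party zip binary.
function zip_command(os_type, zipfile, path)
  if os_type == "windows" then
    return 'powershell Compress-Archive -Path "' .. path
      .. '" -DestinationPath "' .. zipfile .. '"'
  end
  return 'zip -r "' .. zipfile .. '" "' .. path .. '"'
end
```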

examples should include test files

The examples showing different structures should include a test directory ready to run (simple) test files, so that people are reminded that this is useful.

Add a testfiles variable?

I wonder if we should have a variable for the list of test files: currently we simply assume *.lvt. That might work nicely with the new checkconfigs approach: one could have all the test files in one place but select based on the name.

binary (pdf) tests don't work: engine-specific pdf is missing

my build.lua looks like this

-- Build script for test
module   = "buildbug"

checkengines = {"pdftex"}


if not release_date then
  kpse.set_program_name("kpsewhich")
  dofile(kpse.lookup("l3build.lua"))
end

In the testfiles folder I have this test.lvt

\input{regression-test}
\documentclass{article}

\START
\begin{document}
some text
\end{document}

Running l3build save test and then l3build check works fine, but
l3build save -p test ends with the message Das System kann die angegebene Datei nicht finden (file not found), and a subsequent check fails as TEST.PDFTEX.PDF is not found:

G:\buildbug>l3build check
Running checks on
  test (1/1)
FC: Kann ./BUILD/TEST/TEST.PDFTEX.PDF nicht öffnen - Datei oder Ordner nicht vorhanden

          --> failed

  Check failed with difference files
  - ./build/test/test.pdftex.cmp

The testfiles folder contains only a test.pdf. The build/test folder contains a test.pdf after the save step, and also a test.ref.pdf after the check step.

Proposal: something like a `saveall` target

When working with unicode-math and friends, I find there's a lot of iteration that happens with the test files since I need to test things that can't be easily checked programmatically so there's a lot of box checking. Even with various LaTeX3 testsuites we have situations where we log a fair amount of material in the TLGs themselves which need a bit of work to correct when making certain types of changes to the code.

At the moment, this is done in a fairly tedious way of either:

  • Running check -H, stopping on an error, then running save testXYZ, then going back to check -H which re-runs a lot of unnecessary tests before getting back to where we were. Then iterate.

Or:

  • Running check, getting a list of diffs, cross-checking them and running save testXYZ, save testYZA, ... . Which doesn't sound so bad unless you accidentally lose track of which tests you were supposed to be interrogating. Or if I simply change some internal logic in my package that changes an innocuous part of the log file that's included in each TLG file, I need to manually run save on each test file.

It occurred to me that there's a better way since we're now all using version control. If we had a target saveall, which operated like check but saved TLG files as it went, we could then use the natural diff tools provided by the version control system to see what had changed. It wouldn't be a dangerous operation since the changes aren't permanent unless they're actually committed. Nonetheless I'd still think to implement something like this with a prompt such as "Are you sure? Type "Yes" to proceed."

Personally, I think this would be a huge improvement to the usability of unit testing. But I seem to remember this topic has been brought up before and the idea wasn't popular in the past. Given the inspiration that version control changes the context of how this would be used, are there any objections to me looking into adding a new target to re-save all test files?

better control over the set of checks

I have a number of testfiles which are rather sensitive to system changes. They rely, e.g., on local files or fonts, or give different results when compiled on Windows than on Linux, e.g. due to path lengths.

That makes it difficult to use services like Travis. Instead of trying to adapt the tests so that they work everywhere, or removing tests that don't work, I would prefer to run only a restricted set of checks on Travis with the help of some configuration file. Ideally it would allow suppressing checks for a test for some engines only.

support setting epoch on documentation builds.

Currently (on TL 2016 and 2017) we allow the epoch date to be set to produce reproducible PDFs for testing.

It would be good to have an option to also set it for documentation.

One possible use for this:

With continuous integration, when an edit of a readme file triggers travis-ci to regenerate everything, you don't end up committing a load of spurious changes to the gh-pages website (well, you still commit, but nothing happens).

Add support for testing based on uncompressed PDF streams

It is possible to produce uncompressed PDF streams, which allow those versed in PDF to examine the detailed output of a TeX run and to check on aspects that may be difficult/impossible to test or debug from the macro layer/\tracingall. Adding support for this area would allow both testing and debugging abilities to be enhanced.

The interface for this is to be decided, but seems likely to use a marker in the .lvt which tells l3build to read the PDF, followed by manipulation of the latter to extract the 'useful' parts.

Unnecessarily generated formats

When I run texlua build.lua check -e etex filename to save time, the system still generates all 3 formats (which is really the main time sink). It would be great if that generation looked at the -e flag and only generated what is needed.

Enable tree copying throughout internals

At present, tree copying support is partial (see 9e1b6be, etc.). This should likely be extended to be consistent across the code. Possibly one or more flags may be needed to denote whether trees should be 'flattened'.

Standardise use of 'check'/'test'

Currently the variables used for running the check and save targets have a mix of names: some are test, some are check. The latter predominates, but testfiles is a very long-standing name for the directory holding .lvt files ... we should nonetheless look over these and tidy up.

Bad argument #1 to require

When trying to run texlua build.lua on Windows 10 with MikTeX, LuaTeX version 1.07.0 (MikTeX 2.9.6600) I get

C:/Program Files/MiKTeX 2.9/tex/latex/l3build/l3build.lua:46: bad argument #1 to 'require' (string expected, got nil)

According to texlua --credits the Lua version is 5.2.4. print(_VERSION) only says 5.2.
