
A collection of libraries and tools to create programming challenges and competitions

Home Page: https://turingarena.org

License: Mozilla Public License 2.0

JavaScript 1.27% CSS 0.22% TypeScript 95.98% HTML 0.30% Shell 1.51% Dockerfile 0.28% Makefile 0.46%

turingarena's Introduction

Turingarena

CI testing

A collection of libraries and tools to create programming challenges and competitions.

Getting started

  1. Make sure a recent version of Node and NPM is installed and in PATH.
  2. Make sure tmux is installed.
  3. To install the dependencies, run:
    ( cd server/ ; npm ci )
    ( cd web/ ; npm ci )

Possible issue

On Ubuntu 18.04 (and possibly other older versions), the default installation may not provide a recent enough version of NPM, so the commands above can fail because npm ci is not recognized. To fix this, upgrade to a recent version of Node/NPM by running:

    curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash -
    sudo apt-get install -y nodejs
  4. Import the example contest with:
    ( cd server/ ; npm run cli -- import ../examples/example-contest/ )
  5. TODO: running the server in production

Using Docker

You can run this application with Docker to get a ready-to-use system, which also works on macOS and Windows.

  1. Build the Docker image (at this point we don't provide prebuilt ones):
    docker build . -t turingarena:turingarena

This will probably take a few minutes, so go grab a coffee while the system builds everything.

  2. Start the server:
    docker run --privileged -it -p 3000:3000 -v $PWD/server:/data turingarena:turingarena serve

Change the port or the working directory (/data) as you wish. It is important to use the --privileged option, otherwise the sandbox will not work. You may need root privileges on your system to use it.

turingarena's People

Contributors

alerighi, cairomassimo, dottapaperella, edomora97, guilucand, irishwhiskey, wil93


turingarena's Issues

Small typo in the daemon's exec file

While typing sudo turingarenad I stumbled upon this error:

Traceback (most recent call last):
  File "/usr/local/bin/turingarenad", line 7, in <module>
    from turingarena_daemon import turingarena_daemon
ImportError: No module named 'turingarena_daemon'

I looked at the file and the 7th line says
from turingarena_daemon import turingarena_daemon

I tried some combinations and it seemed to work with the following line:
from turingarena_cli import turingarena_daemon

It's probably just a small typo, just wanted to let you know!

Avoid using shell commands to specify evaluators

The current approach of specifying the evaluator with a shell command is not effective for evaluators that need to be built before launching (e.g., C++) and/or that have a preferred way of being launched (e.g., Python, with the -u command line option). The evaluator of a problem should instead be specified with a structured definition.
Possible types of evaluators:

  • python. Runs a python script.
  • make. Makes a target executable using GNU make, then runs it (covers C++ and many other languages by configuring the default Makefile properly, and can be customized by providing a Makefile in the same directory)
  • shell. Runs a shell command.

Since specifying an evaluator becomes more involved after this change, their definition should be part of the problem directory. Still, a single string should be sufficient to identify the evaluator (hence, the problem) given the problem directory, to be used in API calls. Further design is required.
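
A purely hypothetical sketch of how such structured definitions might be modeled on the Python side (class and field names are illustrative only, not part of any existing TuringArena API):

# Hypothetical sketch: class and field names are illustrative only,
# not part of any existing TuringArena API.
from dataclasses import dataclass, field
from typing import List, Union


@dataclass
class PythonEvaluator:
    script: str                                  # e.g. "evaluator.py"
    options: List[str] = field(default_factory=lambda: ["-u"])  # unbuffered output


@dataclass
class MakeEvaluator:
    target: str = "evaluator"                    # built with the default (or a local) Makefile


@dataclass
class ShellEvaluator:
    command: str                                 # escape hatch: run an arbitrary shell command


Evaluator = Union[PythonEvaluator, MakeEvaluator, ShellEvaluator]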

Variable gives error in the interface's for loop

I was making this problem: given a number n, return all its divisors. This is the code I wrote (in order: the interface, the evaluator, and the solution):

function get_number_of_divisors(num);
procedure get_divisors(num);
function get_divisor(i);

main {

	read num;

	call n = get_number_of_divisors(num);
	call get_divisors(num);

	for i to n {
		call ans = get_divisor(i);
		write ans;
	}

}
import random
from turingarena import *

for _ in range(10):
	num = random.randint(0, 250)

	try:
		with run_algorithm(submission.source) as process:
			num_div = process.functions.get_number_of_divisors(num)
			process.procedures.get_divisors(num)
			divisors = [process.functions.get_divisor(i) for i in range(num_div)]

	# evaluating
	
	except AlgorithmError as e:
		pass
int *divisors;

int get_number_of_divisors(int num) {

	int number_of_divisors = 1;

	for(int i = 1; i <= num/2; i++) {
		if(num%i == 0) number_of_divisors++;
	}

	return number_of_divisors;

}

void get_divisors(int num) {
	divisors = new int[get_number_of_divisors(num)];
	
	int divisors_index = 0;
	for(int i = 1; i <= num; i++) {

		if (num%i == 0) {
			divisors[divisors_index] = i;
			divisors_index++;
		}

	}

}

int get_divisor(int i) {
	return divisors[i];
}

It gave this error:

Traceback (most recent call last):
  File "evaluator.py", line 17, in <module>
    num_div = process.functions.get_number_of_divisors(num)
  File "/usr/local/turingarena/libraries/python3/turingarena/driver/proxy.py", line 17, in method
    return self._engine.call(request)
  File "/usr/local/turingarena/libraries/python3/turingarena/driver/engine.py", line 43, in call
    return self.get_response_value()
  File "/usr/local/turingarena/libraries/python3/turingarena/driver/engine.py", line 55, in get_response_value
    return int(self.get_response_line())
  File "/usr/local/turingarena/libraries/python3/turingarena/driver/engine.py", line 50, in get_response_line
    assert line
AssertionError
2018-07-04 14:27:36,375    ERROR [  191] turingarena_impl.metaserver: Exception in child server.
Traceback (most recent call last):
  File "/usr/local/turingarena/backend/turingarena_impl/metaserver.py", line 91, in child_finalizer
    yield
  File "/usr/lib/python3.6/contextlib.py", line 365, in __exit__
    if cb(*exc_details):
  File "/usr/lib/python3.6/contextlib.py", line 284, in _exit_wrapper
    return cm_exit(cm, *exc_details)
  File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__
    next(self.gen)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/server.py", line 41, in run_child_server
    server.run()
  File "/usr/local/turingarena/backend/turingarena_impl/driver/server.py", line 105, in run
    self.interface.run_driver(context)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/interface.py", line 69, in run_driver
    self.main_node.driver_run(context=context)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/nodes.py", line 53, in driver_run
    result = self._driver_run(context)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/block.py", line 103, in _driver_run
    result = result.merge(n.driver_run(context.extend(result)))
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/nodes.py", line 53, in driver_run
    result = self._driver_run(context)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/step.py", line 27, in _driver_run
    phase=phase,
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/step.py", line 35, in _run_children
    result = result.merge(n.driver_run(context.extend(result)))
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/nodes.py", line 53, in driver_run
    result = self._driver_run(context)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/statements/call.py", line 208, in _driver_run
    return_value = self.statement.return_value.evaluate(context.bindings)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/expressions.py", line 120, in evaluate
    return bindings[self.reference]
KeyError: Reference(variable=Variable(name='n', dimensions=0), index_count=0)
Traceback (most recent call last):
  File "/usr/local/turingarena/backend/turingarena_impl/driver/languages/cpp/sandbox.py", line 35, in <module>

To me, this seems like the only way to write this specific problem, since you need to know how many divisors num has in order to return a correct array. There might be a way around this with callbacks, but I haven't tried yet.
The code seems fine to me: the evaluator performs exactly the same steps as the interface. For some reason, in this problem the for loop gives this error, even though in previous problems I've implemented the loop with almost exactly the same functions. I could post said problem if needed.

Avoid including sandbox startup time when measuring time usage

For example, let the sandbox announce when it is ready, so that we can subtract the initialization time.
One solution could be to make the sandbox send a SIGSTOP to itself (if possible) so that we can wait(..., WUNTRACED) for it from outside.
At the moment, a call to sleep is made to ensure the sandbox is settled before measuring time.
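
A minimal sketch of the proposed mechanism, assuming a POSIX system: the child stops itself with SIGSTOP once it is settled, and the parent waits with WUNTRACED before starting the clock.

import os
import signal
import time

pid = os.fork()
if pid == 0:
    # Child: pretend this is the sandbox doing its (unmeasured) initialization.
    time.sleep(0.5)
    # Announce readiness by stopping ourselves.
    os.kill(os.getpid(), signal.SIGSTOP)
    # Execution resumes here only after the parent sends SIGCONT.
    os._exit(0)
else:
    # Parent: wait until the child stops, i.e. until initialization is over.
    os.waitpid(pid, os.WUNTRACED)
    start = time.monotonic()       # start measuring only now
    os.kill(pid, signal.SIGCONT)   # let the sandboxed process run
    os.waitpid(pid, 0)             # wait for normal termination
    print("measured time:", time.monotonic() - start)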

Python sandbox: allow importing some modules

E.g., heapq and collections.

The set of allowed modules could even be configurable. The configuration could be done either:

  • during the creation of the Algorithm instance, passing the options to the sandbox manager
  • by adding an extra (optional) configuration file, similar to interface.txt, for specifying sandbox parameters

The second option seems better, as it does not require extra implementation effort on the problem side.
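
As a rough illustration of how the sandbox could enforce such a configuration, the built-in import hook can be wrapped with an allowlist check; this is a sketch only, and the hard-coded list stands in for whatever would be read from the (hypothetical) configuration file.

import builtins

# Hypothetical configuration: in practice this set would be read from a file
# similar to interface.txt in the problem directory.
ALLOWED_MODULES = {"heapq", "collections", "math"}

_original_import = builtins.__import__


def _restricted_import(name, globals=None, locals=None, fromlist=(), level=0):
    # Only check absolute, top-level imports; submodules of allowed packages pass.
    top_level = name.split(".")[0]
    if level == 0 and top_level not in ALLOWED_MODULES:
        raise ImportError(f"module {name!r} is not allowed in the sandbox")
    return _original_import(name, globals, locals, fromlist, level)


builtins.__import__ = _restricted_import

import heapq          # OK
# import socket       # would raise ImportError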

Let problems read parameters (e.g., MAXN) from a TOML file

The same file could be used

  • by the evaluator (e.g., when generating input)
  • by the statement generator (e.g., to interpolate Assumptions: n <= ${parameters.MAXN})
  • by the template generator, to inject the values as constants in the template (e.g., const int MAXN = 1000)
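
A small sketch of how an evaluator might read such a file, assuming a parameters.toml placed in the problem directory (the file name and the MAXN key are illustrative):

# Sketch: read problem parameters from a TOML file. Python 3.11+ ships tomllib
# in the standard library; older versions can use the third-party "toml" package.
import tomllib

# Hypothetical layout: the file name "parameters.toml" and the key "MAXN"
# are illustrative only.
with open("parameters.toml", "rb") as f:
    parameters = tomllib.load(f)

MAXN = parameters["MAXN"]

# The evaluator can now use MAXN when generating inputs,
# e.g. n = random.randint(1, MAXN)
print(f"Assumptions: n <= {MAXN}")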

Variable "not written" in double for loop.

I was trying to use 2D arrays in TuringArena, so to try them out I made a sorting problem. Since my original attempt did not work, I decided to copy and paste the sort problem from TuringArena's examples and tweak it a bit to use 2D arrays. The final code looked like this (I pointed out in comments what I did and did not change):
interface.txt

procedure sort(n, m, a[][]); // put double index in both functions
function get_element(i, j);

main {
    read n, m;

    for i to n {
	for j to m { // added for loop and second inxed
            read a[i][j];
	}
    }

    call sort(n, m, a);
    
    for i to n {
	for j to m { // same as before
            call ans = get_element(i, j);
            write ans;
	}
    }
}

correct.py

def sort(n, m, a):
    global b
    b = []
    for i in range(len(a)): # needed to "double sort" since this is a 2D array
        b.append(sorted(a[i]))
    return sorted(b)


def get_element(i, j): # second index
    return b[i][j]

evaluator.py

import random
from turingarena import *

# made this function to use in the if later instead of the sorted function
def sort_matrix(matrix_to_sort):
	b = []
	for i in range(len(matrix_to_sort)):
		b.append(sorted(matrix_to_sort[i]))
	return sorted(b)

all_passed = True
for _ in range(10):
    n, m = 4, 4
    a = [[random.randint(0, 9) for j in range(m)] for i in range(n)] # made a a 2D array
    b = []
    try:
        with run_algorithm(submission.source) as process:
            process.procedures.sort(n, m, a)
            b = [[process.functions.get_element(i, j) for j in range(m)] for i in range(n)] # double for
    except AlgorithmError as e:
        print(e)
        all_passed = False
    if b == sort_matrix(a):
        print("correct!")
    else:
        print("WRONG!")
        all_passed = False

evaluation.data(dict(goals=dict(correct=all_passed)))

Evaluating all of this gives me this error:

Traceback (most recent call last):
  File "evaluator.py", line 19, in <module>
    b = [[process.functions.get_element(i, j) for j in range(m)] for i in range(n)]
  File "evaluator.py", line 19, in <listcomp>
    b = [[process.functions.get_element(i, j) for j in range(m)] for i in range(n)]
  File "evaluator.py", line 19, in <listcomp>
    b = [[process.functions.get_element(i, j) for j in range(m)] for i in range(n)]
  File "/usr/local/turingarena/libraries/python3/turingarena/driver/proxy.py", line 17, in method
    return self._engine.call(request)
  File "/usr/local/turingarena/libraries/python3/turingarena/driver/engine.py", line 43, in call
    return self.get_response_value()
  File "/usr/local/turingarena/libraries/python3/turingarena/driver/engine.py", line 55, in get_response_value
    return int(self.get_response_line())
  File "/usr/local/turingarena/libraries/python3/turingarena/driver/engine.py", line 50, in get_response_line
    assert line
AssertionError
2018-07-14 20:34:24,261    ERROR [   35] turingarena_impl.metaserver: Exception in child server.
Traceback (most recent call last):
  File "/usr/local/turingarena/backend/turingarena_impl/metaserver.py", line 91, in child_finalizer
    yield
  File "/usr/lib/python3.6/contextlib.py", line 365, in __exit__
    if cb(*exc_details):
  File "/usr/lib/python3.6/contextlib.py", line 284, in _exit_wrapper
    return cm_exit(cm, *exc_details)
  File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__
    next(self.gen)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/server.py", line 41, in run_child_server
    server.run()
  File "/usr/local/turingarena/backend/turingarena_impl/driver/server.py", line 105, in run
    self.interface.run_driver(context)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/interface.py", line 69, in run_driver
    self.main_node.driver_run(context=context)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/nodes.py", line 53, in driver_run
    result = self._driver_run(context)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/block.py", line 103, in _driver_run
    result = result.merge(n.driver_run(context.extend(result)))
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/nodes.py", line 53, in driver_run
    result = self._driver_run(context)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/step.py", line 27, in _driver_run
    phase=phase,
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/step.py", line 35, in _run_children
    result = result.merge(n.driver_run(context.extend(result)))
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/nodes.py", line 53, in driver_run
    result = self._driver_run(context)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/statements/for_loop.py", line 95, in _driver_run
    for i in range(for_range)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/statements/for_loop.py", line 95, in <listcomp>
    for i in range(for_range)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/nodes.py", line 53, in driver_run
    result = self._driver_run(context)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/block.py", line 103, in _driver_run
    result = result.merge(n.driver_run(context.extend(result)))
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/nodes.py", line 53, in driver_run
    result = self._driver_run(context)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/statements/for_loop.py", line 95, in _driver_run
    for i in range(for_range)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/statements/for_loop.py", line 95, in <listcomp>
    for i in range(for_range)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/nodes.py", line 53, in driver_run
    result = self._driver_run(context)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/block.py", line 103, in _driver_run
    result = result.merge(n.driver_run(context.extend(result)))
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/nodes.py", line 53, in driver_run
    result = self._driver_run(context)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/step.py", line 22, in _driver_run
    return self._run_children(context)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/step.py", line 35, in _run_children
    result = result.merge(n.driver_run(context.extend(result)))
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/nodes.py", line 53, in driver_run
    result = self._driver_run(context)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/statements/call.py", line 208, in _driver_run
    return_value = self.statement.return_value.evaluate(context.bindings)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/expressions.py", line 120, in evaluate
    return bindings[self.reference]
KeyError: Reference(variable=Variable(name='ans', dimensions=0), index_count=0)Connection to localhost closed.

It gives an error on the process.functions.get_element(i, j) part, but I don't understand the final error. I can usually solve the Reference(variable=Variable(name='ans', dimensions=0), index_count=0) error by writing the variable in the interface, but here it is clearly written.

Consider dropping support for global variables

Global variables functionality can always be simulated by an extra function call at the beginning.

Pros of global variables:

  • avoid an extra function to be defined
  • avoid ugly statements like ::N = N in solutions, only to copy a parameter to global scope
  • having global variables may be easier for the solution writer

Cons:

  • need to keep the supporting code in the core
  • complicate the API (they are not positional, but passed by key, etc.)
  • in languages like Python, the sandbox is required to run the solution code only after executing init (and in general, the translation of the init block is non-trivial)
  • the solution is forced to use the provided data structures (i.e., plain arrays) to represent the data in the global scope (if they use their own, they lose any advantage of having our global variables in the first place)
  • the problem writer has to choose between two equivalent ways of defining the interface (global variables vs an extra function)
  • an extra initialization function is often needed anyways, so that the solutions can pre-process the global variables as they wish
  • global variables preclude the possibility to use a different translation for parameters and local variables (say, local variables are not exposed to solutions, so they could always be translated as ints, and be converted on-demand to type-safe data types only when passed to functions - however, there are also the callback parameters/return value that have to be taken into account...)

Cannot mkdir db.git: Permission denied

I tried to launch the turingarena skeleton command and it failed with exit status 128. Below is the full error:

Sending work dir: /home/sphero/code/turingarenaExamples (current dir: sat_to_3sat/callbacks)...
fatal: cannot mkdir db.git: Permission denied
Traceback (most recent call last):
  File "/usr/local/bin/turingarena", line 11, in <module>
    sys.exit(turingarena_cli())
  File "/usr/local/lib/python3.5/dist-packages/turingarena_cli.py", line 95, in turingarena_cli
    "git init --bare --quiet db.git",
  File "/usr/lib/python3.5/subprocess.py", line 581, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['ssh', '-o', 'BatchMode=yes', '-o', 'LogLevel=error', '-o', 'UserKnownHostsFile=/dev/null', '-o', 'StrictHostKeyChecking=no', '-p', '20122', 'turingarena@localhost', 'git init --bare --quiet db.git']' returned non-zero exit status 128

This error also appears when I evaluate a submission with the turingarena evaluate [path/to/sub] command.

I'm using Python 3.5.2 and the turingarena/turingarena:latest Docker image.

Add a console script to run the TuringArena daemon

The script should be installed together with the CLI client.
It should execute a docker run of the TuringArena image with a tag corresponding to the version of the CLI client itself (use semver?).
Example

docker run [...options...] turingarena/turingarena:server-v2 [...args...]

It could also check if there is an updated version of the Docker image (i.e., a new minor version under the same tag), suggesting how to update it (i.e., docker pull ...).
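
A rough sketch of what such a console script could look like, with the image tag derived from the CLI version; the entry point name, tag scheme (server-v<major>) and version lookup are assumptions for illustration, not the actual implementation.

#!/usr/bin/env python3
# Sketch of a hypothetical "turingarenad" console script.
import subprocess
import sys

CLI_VERSION = "2.0.0"  # in a real script, read this from the installed CLI package


def main():
    major = CLI_VERSION.split(".")[0]
    image = f"turingarena/turingarena:server-v{major}"
    command = ["docker", "run", "--rm", "-it", image, *sys.argv[1:]]
    print("Running:", " ".join(command))
    sys.exit(subprocess.call(command))


if __name__ == "__main__":
    main()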

Pytest plugin: avoid compiling the *~ and *.bak solution files

==================================== ERRORS ====================================
______________________________ ERROR collecting  _______________________________
/usr/local/lib/python3.6/site-packages/pluggy/__init__.py:617: in __call__
    return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
/usr/local/lib/python3.6/site-packages/pluggy/__init__.py:222: in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
/usr/local/lib/python3.6/site-packages/pluggy/__init__.py:216: in <lambda>
    firstresult=hook.spec_opts.get('firstresult'),
/usr/local/turingarena/turingarena/pytestplugin.py:69: in pytest_collect_file
    interface=problem.interface,
/usr/local/turingarena/turingarena/problem/problem.py:139: in load_source_file
    language = Language(ext)
/usr/local/turingarena/turingarena/sandbox/languages/language.py:21: in __new__
    return cls.from_extension(args[0][1:])
/usr/local/turingarena/turingarena/sandbox/languages/language.py:51: in from_extension
    raise RuntimeError(f"no language with extension{extension} is supported by TuringArena")
E   RuntimeError: no language with extensioncpp~ is supported by TuringArena
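
One possible fix, sketched below, is to skip editor backup files before trying to detect the language. pytest_collect_file is the standard pytest hook; the surrounding logic only sketches what the existing plugin does.

# Sketch: ignore editor backup files during collection.
IGNORED_SUFFIXES = ("~", ".bak", ".orig", ".swp")


def pytest_collect_file(path, parent):
    if str(path).endswith(IGNORED_SUFFIXES):
        return None  # not a real solution file: let pytest ignore it
    # ... existing logic: detect the language from the extension
    # and create the collection item for the solution file ...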

Add examples of problems

Comment on this issue to add proposals for example problems.

  • vertex cover on trees
  • weighted vertex cover on trees

Change APIs based on unix pipes so they never block the server

With the current implementation, when the client (usually a problem evaluator) crashes or is killed by Ctrl-C, the server (e.g., the sandbox server) may block indefinitely (i.e., needs to be killed with a second Ctrl-C).
To solve this, we need to make sure that, as long as the server holds any resource on behalf of the client process, the two processes keep an open pipe between them. This is to ensure that, if the client process dies at any time (actually, if all of its children sharing the open fd die), then the server can detect that the file has been closed and can release the resources.

To implement the server side, we may use blocking operations within (daemon) threads, but in principle it should be possible to implement everything in a single thread using poll.
(see, e.g., https://stackoverflow.com/questions/15055065/o-rdwr-on-named-pipes-with-poll#answer-17384067)
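
A minimal sketch of the server-side detection, assuming the client keeps a dedicated FIFO open for writing as long as it is alive (the path and setup are illustrative):

import os
import select

# Sketch only: assumes the client has already opened this FIFO for writing
# and keeps it open as long as it is alive and using server resources.
fd = os.open("connection.pipe", os.O_RDONLY | os.O_NONBLOCK)

poller = select.poll()
poller.register(fd, select.POLLIN)  # POLLHUP/POLLERR are reported in any case

client_alive = True
while client_alive:
    for _, events in poller.poll():
        if events & (select.POLLHUP | select.POLLERR):
            # All writers are gone: the client (and any children sharing the fd)
            # died or finished, so the server can release the held resources.
            print("client disconnected, releasing resources")
            client_alive = False

os.close(fd)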

Synchronous queues

Synchronous queues are a fully synchronous mechanism for message passing.
Listening on a synchronous queue is a blocking operation. For this reason, the mere listening over a synchronous queue is a "resource" held by the server, and it must be controlled by some open pipe.
No further control mechanism is needed.

A message sent over the queue is segmented into different parts, which are sent or received over different pipes (called request/response payload pipes). For each payload pipe, the client and the server make a (blocking) open, a write/read and a close. They must be opened in a predefined order, agreed upon by client and server.
This mechanism can easily deadlock if, say, the client forgets to write on a payload pipe, or does it in the wrong order. However, this is not a problem for the purpose of this issue, since, if the client dies, then there is another mechanism that notifies the server.

Channels

Channels are a collection of two or more pipes used to send parallel streams of data. They are more efficient than synchronous queues because they do not require synchronization, but they lack explicit message boundaries.
Opening a channel consists of multiple blocking operations, so again this must be controlled by an external single open pipe. The same argument about deadlocks applies also in this case.

Servers and meta-servers

A meta-server listens on a synchronous queue and, for each request, spawns a new thread/process which, in turn, is some kind of child server. The running child server is a kind of resource: each new server must be associated with an open pipe and killed when this pipe is closed.

Proposed solutions

One solution would be to change the implementation of meta-server as follows.
When a client needs a server to be spawned, it must first open a "connection" pipe, and keep it open as long as it is communicating with the server. Then, it can send its request over the synchronous queue.
When the "connection" pipe is closed, the server thread can be killed and all the resources released.

Comments in the generated skeleton to help the newcomer

Taking as an example the skeleton generated for the problem ping_pong, I would add (automatically generated) comments to help newcomers get started and possibly build their own debugging tools and experiments:

#include <stdio.h>
#include <stdlib.h>

void pong() {
    printf("%d\n", 1); // say that you are posing a callback request
    printf("%d\n", 0); // specify which callback function (0 = pong)
    exit(0);
}

void ping();

int main() {
    ping();
    printf("%d\n", 0); // say that you have finished
}

Callback generated code: avoid using strings in I/O

Instead of writing the callback name, or return when there are no more callbacks, do the following:

  • in callbacks, write 1 followed (on the next line) by the index of the callback as integer
  • when the function returns (no more callbacks), output 0

With this change, we can stick to using only integers for I/O. (In the future, we may even use a more efficient serialization format.)
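
For illustration, a small sketch of how the driver side could consume this integer-only stream, assuming one integer per line (this is not the actual driver code):

# Sketch of the driver side of the proposed integer-only callback protocol.
def handle_callbacks(read_line, callbacks):
    """Dispatch callbacks by index until the called function returns."""
    while True:
        if int(read_line()) == 0:
            return  # 0 means: no more callbacks, the function has returned
        callback_index = int(read_line())
        callbacks[callback_index]()


# Example: a process that invokes callback 0 twice, then returns.
lines = iter(["1", "0", "1", "0", "0"])
handle_callbacks(lambda: next(lines), callbacks=[lambda: print("pong() requested")])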

pytestplugin is not installed in the docker image

Test session starts (platform: linux, Python 3.6.3, pytest 3.5.0, pytest-sugar 0.9.1)
rootdir: /tmp/tmpg9_mqx49, inifile:
plugins: xdist-1.22.2, sugar-0.9.1, forked-0.2

turingarena is missing from the plugins list

Use a single git tree hash to identify a pack

Merging two or more trees can be considered a type of "repository". This requires repositories to be associated with the tree they contain, so that dependencies of "merge" repositories are fetched before the merge itself.
Other types of operations (say, applying a patch) can be added similarly.

Check that process.section boundaries correspond to flush instructions

The process.section() context manager allows getting and/or limiting the resource usage of a section of an algorithm. Since the only synchronization with the sandboxed process occurs on I/O, the resource usage can be accurately reported only if the section starts and ends at the moment the output is flushed (either with an explicit flush, or at the end of the execution).
This can be checked at runtime.

Create a pytest plugin to better show progress in execution of tests

In our use case, the tests are usually few in number, but each takes a long time to run.
For this reason, it would be nice to have a graphical view showing:

  • which tests are currently running (with pytest-xdist there can be more running in parallel)
  • how long each test is taking
  • a live view of the logs

which can be implemented as a pytest plugin replacing terminalreporter (see for example https://github.com/Frozenball/pytest-sugar)
This plugin could take over the whole terminal while the tests are running (like an ncurses app). Then, when the tests finish or are interrupted, it could show a standard report (i.e., based on the original code of pytest) on the terminal.

Move deploy scripts to Docker

Currently, the deployment to the cloud (Amazon AWS and HyperSH) is performed by a quite involved .travis.yml file, which makes it not testable locally.

Instead, the build and deploy logic should be implemented as a (Python) script runnable inside the Docker image, i.e., the deployment done via Travis should consist only of the following steps:

  1. build the turingarena-base Docker image,
  2. build the turingarena Docker image,
  3. run several commands using the turingarena Docker image, passing along all the environment variables needed for configuration.

If statement doesn't work in interface

I was testing the if statement as it was written in the guide interface.md. I tried to do a simple subtraction problem with the following code:

function subtract(a, b);

main {

	read a, b;
	if a > b {
		call c = subtract(a, b);
	} else {
		call c = subtract(b, a);
	}
	write c;

}

But it gave the following error as I tried to execute the turingarena template command:

Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "_main_", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/turingarena/backend/turingarena_impl/_main_.py", line 4, in <module>
    server_cli()
  File "/usr/local/turingarena/backend/turingarena_impl/cli.py", line 16, in wrapped
    fun(args, **kwargs)
  File "/usr/local/turingarena/backend/turingarena_impl/server_cli.py", line 69, in server_cli
    commands[args["<cmd>"]](argv2)
  File "/usr/local/turingarena/backend/turingarena_impl/cli.py", line 16, in wrapped
    fun(args, **kwargs)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/cli.py", line 23, in generate_template_cli
    interface = InterfaceDefinition.compile(interface_text)
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/interface.py", line 40, in compile
    for msg in interface.validate():
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/interface.py", line 29, in validate
    yield from self.main_block.validate()
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/block.py", line 41, in validate
    yield from statement.validate()
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/statements/if_else.py", line 30, in validate
    yield from self.condition.validate()
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/statements/if_else.py", line 16, in condition
    return Expression.compile(self.ast.condition, self.context.expression())
  File "/usr/local/turingarena/backend/turingarena_impl/driver/interface/expressions.py", line 21, in compile
    return expression_classes[ast.expression_type](ast, context)
  File "/usr/lib/python3.6/site-packages/bidict/base.py", line 410, in __getitem_
    return self._fwdm[key]
KeyError: 'comparison'

Before that, I executed the sudo docker pull turingarena/turingarena and sudo pip3 install -U turingarena-cli commands, so I should have the most recent version of TuringArena.

Sandbox: use /proc/<pid>/... to get information about the *memory* usage

We switched to using the SIGSTOP signal and wait4 for measuring a process's resource usage. This is inevitable, since measuring the CPU time with microsecond precision only makes sense when the process is stopped.

However, the report provided by wait4 is quite limited for memory usage (i.e., only the maximum resident set size is provided).
By switching back to using the /proc filesystem, more accurate information about memory usage can be obtained, while keeping wait4 to get the CPU time usage.

An interesting feature: since Linux 4.0, it is possible to reset the value of MaxRSS of a running process, by writing 5 to /proc/[pid]/clear_refs (see man proc).
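
For illustration, a Linux-specific sketch of reading the peak RSS from /proc and resetting it via clear_refs (requires Linux 4.0+ and write permission on the file):

# Linux-specific sketch: read the peak RSS (VmHWM) from /proc and reset it.
import os

pid = os.getpid()


def peak_rss_kib(pid):
    # VmHWM ("high water mark") is the peak resident set size, in kiB.
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmHWM:"):
                return int(line.split()[1])


print("peak RSS before reset:", peak_rss_kib(pid), "kiB")

# Since Linux 4.0, writing "5" to clear_refs resets the peak RSS counter.
with open(f"/proc/{pid}/clear_refs", "w") as f:
    f.write("5")

print("peak RSS after reset:", peak_rss_kib(pid), "kiB")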

Can't import random in python solutions

For testing, I put import random as the first line of solution.py in sum_of_two_numbers and this popped up:

invalid literal for int() with base 10: '0.052995999999999995'Traceback (most recent call last):
  File "/usr/local/turingarena/backend/turingarena_impl/driver/languages/python/sandbox.py", line 45, in <module>
  File "/usr/local/turingarena/backend/turingarena_impl/driver/languages/python/sandbox.py", line 40, in main
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'random'

This is pretty weird, since random is a preinstalled (standard library) module.

Reduce overhead in problem-driver communication

Currently, the communication between problem and driver is fully synchronous per function call. This introduces a latency on the order of 10-100 us per function call, which results in an unacceptable 10-100s latency for a problem with a million calls.

The solution is to use a pipe channel instead of a synchronous queue, and only flush the data once per communication block.
The channel has 4 pipes:

  • (data flows from problem to driver)
    • input request pipe
  • (data flows from driver to problem)
    • input error pipe
    • output request pipe
    • response pipe

Protocol:

  • for every function call which does not have any output (return value and/or callbacks), the problem writes the corresponding request on the input request pipe
  • the first time a function call with output is issued, the problem writes this request on the input request pipe, and closes the input request pipe
  • the problem opens and reads fully the input error pipe, to check that no error is reported by the driver in this phase
  • at this point, we assume that the driver has sent all the input to the solution, and has also received all the output from the solution for the current communication block, meaning that the sequence of the remaining function calls for this communication block is fully determined
  • for the first function call with output, the problem reads the output of the request on the response pipe
  • for all the successive function calls, the problem reads the request from the output request pipe, and checks that it matches with the request it is doing, then it reads the response on the response pipe
  • when the problem reaches EOF on the output request pipe, the communication block is finished and the whole process can be repeated again from the beginning

This is done in such a way that by concatenating the input request pipe and the output request pipe we get the full stream of requests, without repetitions. For each of these requests, we can find the corresponding response on the response pipe, where requests with no output have an empty (zero bytes) response (even if they are performed in the output phase).
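
A condensed, illustrative sketch of the problem-side sequence for one communication block. The pipes are assumed to be already-open text-mode file objects, each request/response is simplified to a single line, and the list of requests is assumed to be known up front; none of this reflects the actual API.

# Illustrative sketch only: one line per request/response, pipes already open.
def run_block(requests, input_request, input_error, output_request, response):
    deferred = []  # the first request with output and everything after it

    # Input phase: write requests until (and including) the first one with output,
    # then close the input request pipe.
    for i, req in enumerate(requests):
        input_request.write(req["line"] + "\n")
        if req["has_output"]:
            deferred = requests[i:]
            break
    input_request.close()

    # Check that the driver reported no error while feeding the solution.
    error = input_error.read()
    if error:
        raise RuntimeError(f"driver error: {error!r}")

    # Output phase: read the response of the first request with output directly;
    # for each later request, first check the echo on the output request pipe.
    results = []
    for i, req in enumerate(deferred):
        if i > 0:
            assert output_request.readline().strip() == req["line"]
        results.append(response.readline().rstrip("\n"))

    # EOF on the output request pipe marks the end of the block.
    assert output_request.readline() == ""
    return results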

Show stdout/stderr of driver in tests

Previously, with pytest -s you were able to see the standard output and standard error of the driver.

Since the driver is started in a separate process, its stdout and stderr are now not captured by pytest; thus, if an exception occurs in the driver, you don't see it, and you only get an exception on the child server telling you that the driver died (which is essentially useless).

Thus you can't know what the error is, and this way the tests are much less useful.

If/Switch condition not resolved when needing to break from a loop

Suppose you have an interface.txt like this:

procedure f();

main {
    loop {
        read c;
        if c {
            call f();
        } else {
            break;
        }
    }
}

When you call p.procedures.f(), the if condition is correctly resolved, but when you call p.exit() (also implicitly in Python, when exiting the context manager) the if/switch condition is not resolved, while the correct behavior would be for it to resolve to 0 (so that you exit the loop and the program terminates).

For now, a workaround is to define a procedure terminate() that does nothing and call it like this:

procedure f();
procedure terminate();

main {
    loop {
        read c;
        if c {
            call f();
        } else {
            call terminate();
            break;
        }
    }
}

To exit the program you can now call p.procedures.terminate(); the if condition is then correctly resolved and the program terminates. Of course, this is not ideal.

For reference, take a look at the example problem tic_tac_toe.

Add a method to synchronize with algorithm on checkpoints

Currently, the problem has no way of knowing that there is a checkpoint instruction in the interface.
Hence, it cannot wait for the checkpoint to be actually reached in the solution code.
Add a method to explicitly make a "checkpoint request", which flushes the data and waits for a response.

Add a CLI wrapper that shows/runs the command using docker

This would be a nice workflow...

$ pip install turingarena
$ turingarena evaluate path/of/solution.cpp
Running command:
sudo docker run --rm --network none -v $PWD:/cwd:ro turingarena evaluate path/of/solution.cpp
[...]

To do this, the internal CLI must be changed from turingarena to something else to avoid confusion (say, python -m turingarena.cli)

Improve efficiency of driver: generate, compile and execute interface driver code

The driver is currently several orders of magnitude slower than the sandboxed process.
To achieve an acceptable performance, the only option is to generate the driver code from the interface, and then execute it.

Notice that this generated code is not meant to be exposed to the outside; it is just an implementation detail. So it can easily communicate with internal (non-generated) code, and it should be much easier to realize.

turingarena command -h

Currently, leaving aside strings not relevant to this issue, it works like this:

turingarena validate -h
Validate interface file

Usage:
    validate [options]

Options:
    -I --interface=<file>  Interface definition file [default: interface.txt].

Whereas the correct usage should be:

Usage:
    turingarena validate [options]

The same also applies to the other commands, e.g. template --> turingarena template.
