utf's People

Contributors

aaaaalbert, asm582, choksi81, justincappos, linkleonard, lukpueh, monzum, priyam3nidhi


utf's Issues

Unit test timeout fail after r6484

All other unit tests pass after r6484; only the timeout test fails:

Running: ut_nm_validate_connection_timeout.py               [ PASS ]
Running: ut_nm_listfilesinvessel_clean_state_add.py         [ PASS ]
Running: ut_nm_changeadvertise_invalidsequenceidnegative.py [ PASS ]
Running: ut_nm_changeadvertise_invalidsequenceid.py         [ PASS ]
Running: ut_nm_changeownerinfo_longstring.py                [ PASS ]
Running: ut_nm_changeowner_expired.py                       [ PASS ]
Running: ut_nm_private_retrievefile.py                      [ PASS ]
Running: ut_nm_listfilesinvessel_add_and_remove.py          [ PASS ]
Running: ut_nm_addfiletovessel_emptyfilename.py             [ PASS ]
Running: ut_nm_changeowner_validsequenceid.py               [ PASS ]
Running: ut_nm_startstopvessel.py                           [ PASS ]
Running: ut_nm_setrestrictions.py                           [ PASS ]
Running: ut_nm_changeownerinfo.py                           [ PASS ]
Running: ut_nm_timeout2.py                                  [ PASS ]
Running: ut_nm_addfiletovessel_invalidfilename.py           [ PASS ]
Running: ut_nm_changeadvertise_invalidsequenceidstring.py   [ PASS ]
Running: ut_nm_addfiletovessel_parentdir.py                 [ PASS ]
Running: ut_nm_addfiletovessel_emptyfile.py                 [ PASS ]
Running: ut_nm_deletefileinvessel.py                        [ PASS ]
Running: ut_nm_rawsaynonexistentmethod.py                   [ PASS ]
Running: ut_nm_private_deletefile.py                        [ PASS ]
Running: ut_nm_changeusers_invalidActions.py                [ PASS ]
Running: ut_nm_private_listfiles.py                         [ PASS ]
Running: ut_nm_fastclient.py                                [ PASS ]
Running: ut_nm_addfiletovessel_duplicate_filename.py        [ PASS ]
Running: ut_nm_changeadvertise.py                           [ PASS ]
Running: ut_nm_changeownerinfo_emptystring.py               [ PASS ]
Running: ut_nm_changeusers.py                               [ PASS ]
Running: ut_nm_resetvessel_multireset.py                    [ PASS ]
Running: ut_nm_private_resetvessel.py                       [ PASS ]
Running: ut_nm_signedsayemptymethodname.py                  [ PASS ]
Running: ut_nm_timeout.py                                   [ FAIL ]

Standard out :
..............................Produced..............................
Did not timeout!

..............................Expected..............................

None

Running: ut_nm_getresources.py                              [ PASS ]
Running: ut_nm_changeowner_invalidkey.py                    [ PASS ]
Running: ut_nm_validate_connection_limit.py                 [ PASS ]
Running: ut_nm_changeownerinfo_unicode.py                   [ PASS ]
Running: ut_nm_changeownerinfo_escapecharacters.py          [ PASS ]
Running: ut_nm_resetvessel.py                               [ PASS ]
Running: ut_nm_signedsaynonexistentmethod.py                [ PASS ]
Running: ut_nm_simple.py                                    [ PASS ]
Running: ut_nm_joinsplitvessels.py                          [ PASS ]
Running: ut_nm_changeowner.py                               [ PASS ]
Running: ut_nm_rawsayemptymethodname.py                     [ PASS ]
Running: ut_nm_readvessellog.py                             [ PASS ]
Running: ut_nm_getoffcut.py                                 [ PASS ]
Running: ut_nm_addfiletovessel.py                           [ PASS ]
Running: ut_nm_listfilesinvessel.py                         [ PASS ]
Running: ut_nm_private_addfile.py                           [ PASS ]
Running: ut_nm_listfilesinvessel_no_files.py                [ PASS ]

Now stopping subprocess: ut_nm_subprocess.py

Some repy tests seem to fail on Windows XP.

Some repy tests produce a FAIL notification on Windows machines because the expected result is a return value of '' while None is produced instead. The affected tests are:

ut_repytests_randomratetest.py
ut_repytests_testmemoryallocwithexceptions2.py
ut_repytests_testmemoryquota.py
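A sketch of how the result comparison could treat None and the empty string as equivalent (the helper name is mine, not the framework's):

```python
def outputs_match(produced, expected):
    """Compare a test's produced output against its expected output,
    treating None and the empty string as equivalent."""
    normalize = lambda value: '' if value is None else value
    return normalize(produced) == normalize(expected)
```

This would make the three tests above pass on Windows without changing their expected values.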

continuous build nodemanager tests fail more since UTF integration

The continuous build's nodemanager tests have been failing more often since UTF was added.

http://blackbox.cs.washington.edu/~continuousbuild/

My impression from the logs there is that the tests are probably running before the nodemanager has fully started. The startup script for those tests probably needs a simple sleep, or a loop-with-sleep that checks on the nodemanager's startup progress. I think this used to be done, but it may have been lost in the migration to utf.
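A minimal sketch of such a loop-with-sleep; the is_running check is caller-supplied and hypothetical (e.g. probe the nodemanager's port or look for its log file):

```python
import time

def wait_for_nodemanager(is_running, timeout=30, poll_interval=1):
    """Poll is_running() until it returns True or the timeout elapses.

    Returns True if the nodemanager came up in time, False otherwise,
    so the startup script can abort instead of running tests against
    a nodemanager that never started.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if is_running():
            return True
        time.sleep(poll_interval)
    return False
```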

Unwanted unit test exception for python 2.7

The following error is often raised when the unit tests are run on Python 2.7. It has been around for a while and should be fixed as it makes it hard to run unit tests on the latest Python.

monzum@TestUbuntu:~/exdisk/work/repy_v1_client$ python2 utf.py -Tm nm
Testing module: nm
    Running: ut_nm_addfiletovessel.py                           [ FAIL ] [ 28.45s ]
--------------------------------------------------------------------------------
Standard error :
..............................Produced..............................
Exception safety_exceptions.RunBuiltinException: RunBuiltinException() in <bound method Popen.__del__ of <subprocess.Popen object at 0x9b9754c>> ignored
Exception safety_exceptions.RunBuiltinException: RunBuiltinException() in <bound method Popen.__del__ of <subprocess.Popen object at 0x9b9754c>> ignored

testprocess.py is a python file?

Why is testprocess.py a Python file? It has nmclient.repy and session.repy embedded in the file that is committed to SVN. What the heck is going on?

Seash unit test gets false negatives

The seash unit tests fail when they should not. We get the following kind of error message:

Standard output:
.........................Produced.....................

.........................Expected.....................
None

This issue seems to occur only on Mac. This could be an issue with the utf module.

Need support added to choose layers when running utf tests...

In repyV2, one can easily interpose security layers, shims, models from CheckAPI, etc. into any program. We need to add support to utf to allow unit tests to be run with different modules.

For example, here is how I run the allpairspingv2 program with the check_api portability verification.

python repy.py restrictions.full encasementlib.repy check_api.repy dylink.repy librepy.repy allpairspingv2.repy 12345

Without check_api verification, it looks like this:

python repy.py restrictions.full encasementlib.repy dylink.repy librepy.repy allpairspingv2.repy 12345

Note: more than one security layer can be used at a time. I believe that all security layers will need to be loaded between the encasementlib and dylink. I'm not sure if this is true for shims. (Monzur: please chime in)
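As a sketch, the composition above could be captured by a small command builder. The helper and its argument names are hypothetical; the filenames and the layers-between-encasementlib-and-dylink ordering come from the examples in this ticket (and, as noted, it is unconfirmed whether that ordering also holds for shims):

```python
def build_repy_command(program, port, security_layers=(), use_checkapi=False):
    """Compose a repy invocation with optional security layers loaded
    between encasementlib and dylink, per the examples above."""
    cmd = ['python', 'repy.py', 'restrictions.full', 'encasementlib.repy']
    if use_checkapi:
        cmd.append('check_api.repy')
    cmd.extend(security_layers)      # zero or more layers at once
    cmd.extend(['dylink.repy', 'librepy.repy', program, str(port)])
    return cmd
```

utf could expose something like this via a flag that passes extra layer filenames through to the repy command line.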

Non-existent child process causes run_tests to produce an error...

If the repy tests are run and an error causes repy to abort before forking a child process, run_tests gets confused: the odd-ball test that kills the monitor process errors out inside run_tests. This hides the underlying repy error from the person running the tests.

Architect an improved unit testing infrastructure for the Seattle project

The current unit testing system was adequate at the time of its creation, but it has become a drag on developing the Seattle software. An improved system would define standard protocols and conventions for running automated tests. As the project gains exposure, it is crucial that we can easily run all unit tests so that bugs are detected early and fixed before a general release.

run_tests.py doesn't run python tests correctly.

run_tests.py doesn't run python tests correctly unless they are z_ tests. For example, a test py_n_foo.py will be run as: python repy.py py_n_foo.py.

This went undetected because running such a test under repy.py just prints repy's usage message, and producing that output makes the test appear to pass!
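A sketch of the dispatch fix, assuming the py_ prefix marks plain Python tests; the restrictions filename here is illustrative, not run_tests.py's actual default:

```python
import sys

def build_test_command(testname):
    """Dispatch a test file to the right interpreter invocation:
    py_-prefixed tests run directly under Python, everything else
    runs as a repy program under repy.py."""
    if testname.startswith('py_'):
        return [sys.executable, testname]
    # 'restrictions.default' is a placeholder restrictions file name.
    return [sys.executable, 'repy.py', 'restrictions.default', testname]
```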

Check the number of openDHT servers and email if it's less than 100.

Arvind emailed and requested:

BTW, can you make one tweak to your check-up script that sends out an email if there are less than 100 servers listed on http://opendht.org/servers.txt

I can rig up something to do it, but I figured it might be easier for you to include that in your script. My hope is to have 250+ nodes on OpenDHT and I have a few scripts for keeping them up. I just wanted an independent checking script that might not get nixed by say the controlling machine going down.

We should make this change to the integration test.
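A minimal sketch of the threshold check, assuming the caller has already fetched the body of servers.txt (the helper name is mine):

```python
def needs_alert(servers_text, threshold=100):
    """Count the non-empty lines in the servers.txt payload and report
    whether the server count has dropped below the threshold, plus the
    count itself for the alert e-mail body."""
    count = sum(1 for line in servers_text.splitlines() if line.strip())
    return count < threshold, count
```

The check-up script would fetch http://opendht.org/servers.txt, call this, and send the e-mail when the first element is True.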

UTF doesn't indicate which test is currently running

Right now when UTF is running a test there is no information printed. Once the test finishes, it prints the name and the result.

For debugging tests that time out (and other weirdness), it would be nice if UTF printed the test name before the test starts running.
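A sketch of the change: write the name without a newline and flush before launching the test, so the name is visible even if the test hangs (the field width mimics the aligned output above; the helper is hypothetical):

```python
import sys

def announce_test(testname, out=sys.stdout):
    """Write the test name (no trailing newline) before the test runs,
    flushing so it appears immediately; the result column can then be
    appended to the same line when the test finishes."""
    out.write("Running: %-50s" % testname)
    out.flush()
```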

ut_registerhttpcallback_dictionary_check.repy Error

The following line raises an error:

recvd_content = httpretrieve_get_string('http://127.0.0.1:12345/fake/path.html', http_query, http_post)

Error:
Uncaught exception! Following is a full traceback, and a user traceback.
The user traceback excludes non-user modules. The most recent call is displayed last.

Full debugging traceback:
"repy.py", line 189, in main
"/share/trunk/test_server/virtual_namespace.py", line 116, in evaluate
"/share/trunk/test_server/safe.py", line 304, in safe_run
"ut_registerhttpcallback_dictionary_check.repy", line 2021, in

User traceback:
"ut_registerhttpcallback_dictionary_check.repy", line 2021, in

Exception (with type 'exceptions.KeyError'): 'Content-Length'

Terminated

Note: I used the newest httpretrieve.repy and httpserver.py from the trunk.

many test_nat_servers_running.py using all memory on blackbox

blackbox.cs ran out of memory (including swap) and was refusing to allocate memory to new processes. My guess is that the culprit is the couple hundred instances of test_nat_servers_running.py that are running:

geni@blackbox:~$ ps -ef | grep "/usr/bin/python /home/integrationtester/cron_tests/nat_tests/test_nat_servers_running.py" | wc -l
257
geni@blackbox:~$ free
             total       used       free     shared    buffers     cached
Mem:        254016     248412       5604          0       1344      15688
-/+ buffers/cache:     231380      22636
Swap:       746980     746908         72

Apache was the next biggest memory consumer, so maybe my SeattleGeni testing on blackbox was the actual problem and the many NAT test processes were intentional. After restarting Apache, here's the free memory:

jsamuel@blackbox:~$ free
             total       used       free     shared    buffers     cached
Mem:        254016     171804      82212          0       1520      24196
-/+ buffers/cache:     146088     107928
Swap:       746980     700772      46208

There's not a lot of memory to start with here, though, so if those nat tests aren't supposed to be hanging around, they should be cleared up.

Some unit tests fail, even after waiting

The following unit tests fail even after waiting. Tested when building a new installer for Mac.

ut_repytests_testconcurrentopenconns.py
ut_repytests_testnetbothdup.py
ut_repytests_testnetconndup.py
ut_repytests_testnetmessdup.py
ut_repytests_testrecvmessfunctions.py
ut_repytests_testwaitforconnfunctions.py

Integration tests aren't working...

I haven't received any emails, and OpenDHT is down. We need to know when our integration tests fail; they don't do us any good if they don't work.

Integration test to periodically pull nodemanager logs from nodes

I propose an integration test that periodically pulls nodemanager logs from our nodes (softwareupdater.old and nodemanager.old, plus the *.new files if they exist), to uncover issues we may not otherwise notice. Monzur mentioned he had a script for this purpose before, so if possible we should integrate it into our monitoring test suite and our repository.

UTF to support checking strings that span multiple lines

The strings that you tell UTF to check for can only be one line long; they cannot span multiple lines. For the next code block, for example, you cannot test for the entire two-line string. You can only test for single-line segments, e.g. 'hello world' or 'this is a test message', because a newline character cannot be represented in the directive.

# These work:
#pragma out hello world
# -- OR --
#pragma out this is a test message

# This will fail
#pragma out hello world\nthis is a test message

# Test case:
print "hello world"
print "this is a test message"
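One possible extension is to interpret a literal backslash-n in the pragma payload as a real newline before matching. This is a hypothetical sketch, not current UTF behavior, and the helper names are mine:

```python
def expand_pragma_out(payload):
    """Turn a literal backslash-n in a '#pragma out' payload into a
    real newline, so one directive can describe multi-line output."""
    return payload.replace('\\n', '\n')

def output_matches(payload, produced):
    """Check the expanded expectation against the produced output."""
    return expand_pragma_out(payload) in produced
```

With this, `#pragma out hello world\nthis is a test message` would match the two-line output of the test case above.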

UTF has an error on Windows XP when the test run ends.

After UTF has run a module's tests (for example, utf.py -m nm), it tries to kill a subprocess and fails. The error is noted below:

Now killing subprocess: ut_nm_subprocess.py
Internal error. Trace:
--------------------------------------------------------------------------------
Traceback (most recent call last):
  File "utf.py", line 596, in <module>
    main()
  File "utf.py", line 173, in main
    test_module(module_name, module_file_list)
  File "utf.py", line 284, in test_module
    os.kill(sub.pid, signal.SIGTERM)
AttributeError: 'module' object has no attribute 'kill'
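The AttributeError occurs because os.kill only gained Windows support in Python 2.7 (Popen.terminate() appeared in 2.6). A sketch of a platform-tolerant replacement for the os.kill call in test_module; the ctypes fallback is an assumption for very old Windows Pythons:

```python
import os
import signal

def terminate_subprocess(sub):
    """Stop a subprocess.Popen object even on platforms where
    os.kill is unavailable."""
    if hasattr(sub, 'terminate'):        # Python >= 2.6: portable
        sub.terminate()
    elif hasattr(os, 'kill'):            # POSIX fallback
        os.kill(sub.pid, signal.SIGTERM)
    else:                                # old Windows Pythons (assumed path)
        import ctypes
        PROCESS_TERMINATE = 1
        handle = ctypes.windll.kernel32.OpenProcess(
            PROCESS_TERMINATE, False, sub.pid)
        ctypes.windll.kernel32.TerminateProcess(handle, -1)
        ctypes.windll.kernel32.CloseHandle(handle)
```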

run_tests stderr stripping issue

exec_command(command) parses the output for a "Terminated" token, which only some shells emit in that form. zsh, for instance, acts differently, producing:

zsh: terminated python ./repy.py ./restrictions.fullcpu re_semaphore_simple.py

Code responsible for this:

# I want to look at the last line. it ends in \n, so I use index -2
if len(theout.split('\n')) > 1 and theout.split('\n')[-2].strip() == 'Terminated':

P. S. run_tests.py is full of issues. I might group all of them in a single ticket.
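A shell-independent alternative is to run the child without a shell and inspect its return code, rather than scraping shell output. Sketch (POSIX semantics: a negative return code -N means the child died on signal N; the helper name is mine):

```python
import signal
import subprocess

def run_and_check_terminated(cmd):
    """Run cmd (a list, no shell) and report whether the child died
    from SIGTERM, based on its return code instead of shell text."""
    proc = subprocess.Popen(cmd,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate()
    return out, err, proc.returncode == -signal.SIGTERM
```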

preparetest -t should include the software updater tests.

When preparetest -t is run, it should include the necessary files to run the software updater tests. This will make it easier for us to test that the software updater is working.

This will also include the tests in the installers built by make_base_installers when they are told to include the unit tests.

UTF accepts bogus and zero arguments

UTF should reject invalid arguments (including no arguments).

It currently accepts:

"python utf.py" --runs all tests
"python utf.py asd" --runs all tests
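A sketch of the validation utf.py could do before dispatching. The flag names here are illustrative, not utf.py's actual option set:

```python
def validate_args(argv):
    """Accept only recognized invocations; reject bogus or missing
    arguments instead of silently running every test."""
    if not argv:
        return False                        # require an explicit choice
    if argv == ['-a']:
        return True                         # run all tests, on purpose
    if len(argv) == 2 and argv[0] in ('-m', '-f'):
        return True                         # run one module or one file
    return False
```

With this, both `python utf.py` and `python utf.py asd` would print usage and exit instead of running the full suite.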

utf ignores second and further arguments to #pragma repy

r6986 removed blanket dylink support for unit tests, claiming some tests would break otherwise. Unfortunately, it also removed a perfectly valid statement that allowed for the inclusion of more parameters in the #pragma repy directive than only a restrictions file.

Additional parameters were possible before, and the comment to r6986 only extends to the dylink line added in r6926, so I conclude this removal was done in error. I'm adding the line back in.

Error in verifyfiles.py

There is a simple typographical error in verifyfiles.py (a script that checks for consistency of a seattle install).

It produces the following error message:

File "verifyfiles.py", line 635, in <module>
    main()
  File "verifyfiles.py", line 620, in main
    look_dir = sys.argv[3]
  File "verifyfiles.py", line 576, in verify_files
    filestatus_dict[reqfile] = ... + " not found"
KeyError: 'xmlrpc_common.repy'

Ability to run individual unit tests

Something similar to run_tests.py, but with the ability to run a single test or a subset of tests. Currently, if only a few unit tests fail, you must re-run all of them, which is especially annoying on slow systems (e.g. Windows under virtualization, or mobile devices).
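A sketch of the selection step, using glob patterns over test filenames (the helper name is mine):

```python
import fnmatch

def select_tests(all_tests, patterns):
    """Filter unit test filenames by one or more glob patterns so a
    developer can re-run just the failing tests. An empty pattern
    list keeps the current run-everything behavior."""
    if not patterns:
        return list(all_tests)
    return [t for t in all_tests
            if any(fnmatch.fnmatch(t, p) for p in patterns)]
```

For example, `select_tests(tests, ['ut_nm_timeout*'])` would re-run only the timeout tests.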

Nodes with stopping processes

I ran testing on the problematic nodes that I brought up a few days ago using start_seattle.sh. Below are some nodes with a long uptime that nonetheless failed tests. Each host appears three times: the first run checks uptime, the second is a regular test run (never mind that the first and second are separate runs), and the third comes after the services were started back up (it just indicates that they were brought up).


Testing on node: pl1.6test.edu.cn

REACHED!
file_checker_finished
[Node Manager is not running.
ProcessCheckerFinished
08:00:29 up 6 days, 17:14, 0 users, load average: 2.20, 1.69, 1.60
PROBLEM OCCURRED!
PO! pl1.6test.edu.cn


Testing on node: pl1.6test.edu.cn

REACHED!
file_checker_finished
NodeManager Node Manager is not running.
ProcessCheckerFinished
PROBLEM OCCURRED!
PO! pl1.6test.edu.cn


Testing on node: pl1.6test.edu.cn

REACHED!
file_checker_finished
ProcessCheckerFinished
GOOD!


Testing on node: planetdev05.fm.intel.com

REACHED!
file_checker_finished
[Software Updater is not running.
NodeManager Node Manager is not running.
ProcessCheckerFinished
07:58:40 up 94 days, 12:09, 0 users, load average: 1.81, 1.59, 1.60
PROBLEM OCCURRED!
PO! planetdev05.fm.intel.com


Testing on node: planetdev05.fm.intel.com

REACHED!
file_checker_finished
[Software Updater is not running.
NodeManager Node Manager is not running.
ProcessCheckerFinished
PROBLEM OCCURRED!
PO! planetdev05.fm.intel.com


Testing on node: planetdev05.fm.intel.com

REACHED!
file_checker_finished
ProcessCheckerFinished
GOOD!


Testing on node: planetlab2.fri.uni-lj.si

REACHED!
file_checker_finished
[Software Updater is not running.
ProcessCheckerFinished
07:59:23 up 86 days, 19:15, 0 users, load average: 1.84, 1.80, 1.96
PROBLEM OCCURRED!
PO! planetlab2.fri.uni-lj.si


Testing on node: planetlab2.fri.uni-lj.si

REACHED!
file_checker_finished
SoftwareUpdater Software Updater is not running.
ProcessCheckerFinished
PROBLEM OCCURRED!
PO! planetlab2.fri.uni-lj.si


Testing on node: planetlab2.fri.uni-lj.si

REACHED!
file_checker_finished
ProcessCheckerFinished
GOOD!


Testing on node: planetlab2.hiit.fi

REACHED!
file_checker_finished
[Node Manager is not running.
ProcessCheckerFinished
07:59:42 up 123 days, 51 min, 0 users, load average: 2.38, 2.24, 2.49
PROBLEM OCCURRED!
PO! planetlab2.hiit.fi


Testing on node: planetlab2.hiit.fi

REACHED!
file_checker_finished
NodeManager Node Manager is not running.
ProcessCheckerFinished
PROBLEM OCCURRED!
PO! planetlab2.hiit.fi


Testing on node: planetlab2.hiit.fi

REACHED!
file_checker_finished
ProcessCheckerFinished
GOOD!


Testing on node: planetlab2.uc.edu

REACHED!
file_checker_finished
[Software Updater is not running.
NodeManager Node Manager is not running.
ProcessCheckerFinished
07:58:41 up 10 days, 5:39, 0 users, load average: 0.69, 0.80, 0.78
PROBLEM OCCURRED!
PO! planetlab2.uc.edu


Testing on node: planetlab2.uc.edu

REACHED!
file_checker_finished
[Software Updater is not running.
NodeManager Node Manager is not running.
ProcessCheckerFinished
PROBLEM OCCURRED!
PO! planetlab2.uc.edu


Testing on node: planetlab2.uc.edu

REACHED!
file_checker_finished
ProcessCheckerFinished
GOOD!


Testing on node: planetlab3.csail.mit.edu

REACHED!
file_checker_finished
[Software Updater is not running.
NodeManager Node Manager is not running.
ProcessCheckerFinished
07:59:33 up 162 days, 4:36, 0 users, load average: 6.84, 7.99, 8.69
PROBLEM OCCURRED!
PO! planetlab3.csail.mit.edu


Testing on node: planetlab3.csail.mit.edu

REACHED!
file_checker_finished
[Software Updater is not running.
NodeManager Node Manager is not running.
ProcessCheckerFinished
PROBLEM OCCURRED!
PO! planetlab3.csail.mit.edu


Testing on node: planetlab3.csail.mit.edu

REACHED!
file_checker_finished
ProcessCheckerFinished
GOOD!


Testing on node: planetlab4-dsl.cs.cornell.edu

REACHED!
file_checker_finished
ProcessCheckerFinished
07:59:39 up 162 days, 11:27, 0 users, load average: 2.92, 2.47, 2.31
PROBLEM OCCURRED!
PO! planetlab4-dsl.cs.cornell.edu


Testing on node: planetlab4-dsl.cs.cornell.edu

REACHED!
file_checker_finished
[Node Manager memory usage is unusually high.
ProcessCheckerFinished
PROBLEM OCCURRED!
PO! planetlab4-dsl.cs.cornell.edu


Testing on node: planetlab4-dsl.cs.cornell.edu

REACHED!
file_checker_finished
ProcessCheckerFinished
GOOD!


Testing on node: planetlab-5.EECS.CWRU.Edu

REACHED!
file_checker_finished
SoftwareUpdater Software Updater is not running.
[Node Manager is not running.
ProcessCheckerFinished
07:58:17 up 127 days, 10:49, 0 users, load average: 0.33, 0.16, 0.11
PROBLEM OCCURRED!
PO! planetlab-5.EECS.CWRU.Edu


Testing on node: planetlab-5.EECS.CWRU.Edu

REACHED!
file_checker_finished
SoftwareUpdater Software Updater is not running.
[NodeManager] Node Manager is not running.
ProcessCheckerFinished
PROBLEM OCCURRED!
PO! planetlab-5.EECS.CWRU.Edu


Testing on node: planetlab-5.EECS.CWRU.Edu

REACHED!
file_checker_finished
ProcessCheckerFinished
GOOD!


Error in testing script

After running tests, testprocess.py errored out with the following error:

python ./seattle_repy/testprocess.py


Error, do this: mount -t proc none /proc
Traceback (most recent call last):
  File "./seattle_repy/testprocess.py", line 201, in <module>
    updater_mem = (rawstring.split())[1]
IndexError: list index out of range

The machine that this occurred on is a planetlab machine, planetlab04.cnds.unibe.ch

Additional info:
-Node is 0.1h
-All files in seattle_repy are correct (verified by hash)

utf.py does not run shutdown/subprocess/setup scripts without base test

As of the current revision (r7170), ut_utftests_test_setup.py, ut_utftests_test_shutdown.py, and ut_utftests_test_subprocess.py in the utftests suite do not run.

Unit tests of the form ut_modulename_testname_setup.py, ut_modulename_testname_subprocess.py, or ut_modulename_testname_shutdown.py do not get run if there is no test named ut_modulename_testname.py. When ut_modulename_testname.py exists, the setup/subprocess/shutdown scripts run normally.

ut_repytests_testlock2.py hangs on failure

When ut_repytests_testlock2.py acquires the third lock before acquiring the second, it fails, but it never releases the lock it holds; the test then hangs because the second acquisition waits forever on a lock that is never freed.
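The general fix is to guarantee release on the failure path. A sketch of the pattern (not testlock2's actual code): run the failing check inside a try/finally so the lock is released even when the assertion raises.

```python
import threading

def with_lock(lock, check):
    """Run check() while holding lock, guaranteeing the release even
    when the check raises, so a failed assertion cannot leave the
    lock held and hang later acquirers."""
    lock.acquire()
    try:
        return check()
    finally:
        lock.release()
```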

Integration test for time server failing

I get this error when trying to run the time server integration test. It seems like it terminates before it can send out an error e-mail.

integrationtester@blackbox:~/cron_tests/timeserver_tests$ python test_time_servers_running.py
Tue Jan  7 12:50:34 2014 : Looking up time_servers
Traceback (most recent call last):
  File "test_time_servers_running.py", line 44, in <module>
    main()
  File "test_time_servers_running.py", line 34, in main
    servers = centralizedadvertise_lookup("time_server")
  File "/home/integrationtester/cron_tests/timeserver_tests/centralizedadvertise_repy.py", line 52, in centralizedadvertise_lookup
    sockobj = timeout_openconn(servername,serverport, timeout=10)
  File "/home/integrationtester/cron_tests/timeserver_tests/sockettimeout_repy.py", line 95, in timeout_openconn
    tsock.connect((desthost, destport))
  File "/home/integrationtester/cron_tests/timeserver_tests/sockettimeout_repy.py", line 203, in connect
    self._openconn()
  File "/home/integrationtester/cron_tests/timeserver_tests/sockettimeout_repy.py", line 277, in _openconn
    self.sockobj = openconn(destip, destport)
  File "/home/integrationtester/cron_tests/timeserver_tests/emulcomm.py", line 1339, in openconn
    comminfo[handle]['socket'].connect((desthost,destport))
  File "<string>", line 1, in connect
socket.gaierror: (-2, 'Name or service not known')

The following needs to be done:

  • Make integration test runnable
  • Report all exceptions via e-mail notification
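A sketch of the second item, assuming a caller-supplied notify() that does the actual e-mail sending:

```python
import traceback

def run_with_notification(test_main, notify):
    """Run an integration test's entry point and hand any uncaught
    exception's traceback to notify() (e.g. an e-mail sender) instead
    of letting the process die silently. Returns True on success."""
    try:
        test_main()
        return True
    except Exception:
        notify(traceback.format_exc())
        return False
```

Wrapping test_time_servers_running.py's main() this way would have e-mailed the socket.gaierror above instead of just printing a traceback.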
