nunit / nunit-console

NUnit Console runner and test engine

License: MIT License


nunit-console's Introduction

NUnit 4 Framework


NUnit is a unit-testing framework for all .NET languages. It can run on macOS, Linux and Windows operating systems. NUnit can be used for a wide range of testing, from unit testing with TDD to full-fledged system and integration testing. It is a non-opinionated, broad and deep framework with multiple different ways to assert that your code behaves as expected. Many aspects of NUnit can be extended to suit your specific purposes.

The latest version, version 4, is an upgrade from the groundbreaking NUnit 3 framework. It is a modernized version, aimed at taking advantage of the latest .NET features and C# language constructs.

If you are upgrading from NUnit 3, be aware of the breaking changes. Please see the NUnit 4 Migration Guide and take care to prepare your NUnit 3 code before you do the upgrade.


Downloads

The latest stable release of the NUnit Framework is available on NuGet or can be downloaded from GitHub. Pre-release builds are available on MyGet.

Documentation

Documentation for all NUnit projects can be found at the documentation site.

Contributing

For more information on contributing to the NUnit project, please see CONTRIBUTING.md and the Developer Docs.

NUnit 3.0 was created by Charlie Poole, Rob Prouse, Simone Busoli, Neil Colvin and numerous community contributors. A complete list of contributors since NUnit migrated to GitHub can be found on GitHub.

Earlier versions of NUnit were developed by Charlie Poole, James W. Newkirk, Alexei A. Vorontsov, Michael C. Two and Philip A. Craig.

License

NUnit is Open Source software and NUnit 4 is released under the MIT license. Earlier releases used the NUnit license. Both of these licenses allow the use of NUnit in free and commercial applications and libraries without restrictions.

NUnit Projects

NUnit is made up of several projects. When reporting issues, please try to report issues in the correct project.

Core Projects

  • NUnit Test Framework - The test framework used to write NUnit tests (this repository)
  • NUnit Visual Studio Adapter - Visual Studio/dotnet adapter for running NUnit 3 and 4 tests in Visual Studio or from the dotnet command line.
  • NUnit Console and Engine - Runs unit tests from the command line and provides the engine that is used by other test runners to run NUnit tests.

Visual Studio Extensions

NUnit Engine Extensions


nunit-console's Issues

Update target information in BUILDING.md

@ChrisMaddock commented on Wed Aug 10 2016

Some of the target information is outdated in BUILDING.md. Some targets no longer exist, others have incorrect dependency information (e.g. testing relying on building.)

Suggest this waits until after the split, as I'm sure it will only change more - maybe we also rewrite it in a more future-proof way. 😄


@CharliePoole commented on Mon Aug 15 2016

Duplicating this in both repositories.

Create a .NET Standard version of the Engine

@rprouse commented on Mon Dec 14 2015

This needs design, but the solution could be platform-specific agents. If so, it will likely be blocked by #362.


@rprouse commented on Sat Jan 02 2016

In #1168, @CharliePoole said,

I'm not sure there is a problem here since this refers (in my mind anyway) to an engine running on the desktop. What we need to find out is whether an engine running under .NET 2.0/3.5 is able to analyze assemblies built for other targets. I don't see why not, since they are not actually loaded. Consider that we are already analyzing .NET 4.5 assemblies. Of course, this needs to be tested on a VM that has only .NET 2.0 or 3.5 installed.

Based on my spikes into this so far, I think this is the way we need to go. The engine can inspect the test assembly and determine its target platform. We just can't run it directly.

This is also mixed up with #677 and #362. My idea so far is as follows,

  • The current NUnit3 engine inspects the test assembly in addition to the framework. It looks for the TargetFrameworkAttribute, which will tell you, for example, if it is targeting .NETCore,Version=v5.0.
  • If the target framework is a non-desktop framework, disallow --inprocess, since we can't run non-desktop targets. We might also want to disallow --process=Single, or else make sure all test assemblies target the same platform.
  • Modify TestAgency to allow it to launch platform specific agents and communicate with them in a manner other than .NET remoting. This is #362. If we are running those agents on the desktop machine, we could even capture Console.In/Out for communications.

The platform-specific agents are where decisions still need to be made. It would be nice to have a portable mini-engine (#677) that is capable of loading and running the portable framework. Like the full framework, this would allow us to run tests no matter which version of the NUnit 3 framework is used. It would likely be a stripped-down version of the engine that can only load from the agent directory and is only intended to be used by agents. This might even boil down to a portable IFrameworkDriver and some supporting classes?

A simplified and not mutually exclusive approach might be that the agents are actually the nunitlite version of the tests. We could modify nunitlite to run directly, or be able to communicate with the engine.


@CharliePoole commented on Sat Jan 02 2016

I think we're pretty much on the same page at a high level. Some details...

  • Can we count on TargetFrameworkAttribute being reliably set?
  • We can't precisely disallow all those options, although we might be able to in some cases. Basically, we can only look at explicit options, but if the defaults are used, we have to wait until we analyze the assembly and then reflect an error back to the runner.
  • I think we still have two very distinct options for running portable and device-based tests: a scaled down engine versus a scaled up driver. Neil has been pushing us strongly toward an engine for CF but I'm not sure that's the easiest approach. At some point, we have to spike both approaches. I think the easiest way to do that initially is right on the desktop itself.

Charlie


@rprouse commented on Mon Jan 04 2016

Can we count on TargetFrameworkAttribute being reliably set?

I have tested quite a few different assemblies and have not found it unset except for .NET 2.0 assemblies. I think that we can safely assume that if it is not set, it targets a full .NET framework. I assume that these days with so many potential targets, it needs to be set.

I think we still have two very distinct options for running portable and device-based tests: a scaled down engine versus a scaled up driver. Neil has been pushing us strongly toward an engine for CF but I'm not sure that's the easiest approach. At some point, we have to spike both approaches. I think the easiest way to do that initially is right on the desktop itself.

I installed the .NET Portability Analyzer extension for Visual Studio and analyzed the engine against the platforms that we want to support. Based on what is not available, I think approaching it from the driver end is probably the best option. At a minimum, we need to be able to load the framework in a version independent way, explore or run the tests and communicate the results. The majority of the functionality of the engine is used in the startup process, not in the agents themselves.


@xied75 commented on Wed Feb 17 2016

πŸ‘ to track.


@rprouse commented on Wed Feb 17 2016

@xied75, for future reference, you can just click the subscribe button at the bottom of the toolbar on the left. That way, all other subscribers don't get an email 😉

(screenshot)


@xied75 commented on Wed Feb 17 2016

Well, this is in fact a new trend on GitHub. Since GitHub refuses to add a feature that lets you see what you've subscribed to, people now post +1 in order to find the threads they want to track.


@rprouse commented on Fri Jun 10 2016

I am going to move this out of 3.4. The new dotnet-test-nunit will run .NET Core and UWP apps and the promise of netstandard may change the way that we approach this. I want to see how things go after the release of .NET Core before committing to major changes in the engine.


@CharliePoole commented on Fri Jun 10 2016

That makes perfect sense.

Update --labels switch with option to show real-time pass/fail results in console runner

@dybs commented on Fri Aug 12 2016

When running a large number of test cases, it would be nice to see the pass/fail status of each case in the console as it completes. Using --labels=All, I can see which tests have run, but the results are not available until all cases have completed. Seeing which cases fail during a long scenario would allow me to investigate the failures while the remaining tests continue to run. It also has the advantage of knowing which cases passed/failed if the test scenario happens to crash and doesn't get to write the result file.

Per the discussion in #1735, some ideas are to modify --labels with the following options:

  • Results -> same behavior as =All, but includes a Pass/Fail/Error status
  • Before -> shows the test case before it executes, and appends the status after the test completes. Similar to On.
  • After -> shows the test case and status together after the test case completes.

@CharliePoole commented on Fri Aug 12 2016

I like this idea, but I would modify/simplify it by preference:

  • Off No change
  • On No change
  • Before / All shows test label when the test starts executing.
  • After Shows all tests after they have completed, including status.

Note that All is described here as it is already supposed to work. There's currently a bug in that the label is output at the end of a test; we should fix that. We should keep All so as not to break existing usage.

The introduction of After means that we have to treat it as also meaning On. That is, if a test produces output, there should be a label before that output and another one after it, together with the test status.
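If this proposal were adopted, invocations might look roughly like the following. This is only a sketch: the Before and After values are the proposed additions from this thread and did not exist at the time of writing, and the assembly name is made up.

```shell
# Proposed: print each test's label as it starts (today's intended All behavior):
nunit3-console MyTests.dll --labels=Before

# Proposed: print each test's label together with its status
# (Passed, Failed, Error, Ignored, ...) after it completes:
nunit3-console MyTests.dll --labels=After
```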

The change should be made in both the console runner and nunitlite. Once we split the repos, this issue will need to be duplicated.

The fix is relatively easy. Depending on who takes it, I can mentor them as needed.


@constructor-igor commented on Sun Aug 14 2016

Hi @CharliePoole,

Probably, I can start to work on the issue.
I checked all existing options of the "--labels" parameter. Could you help me understand the suggested changes? I added a sample with "--labels=All". As I understand it, the issue suggests adding the "status" of each test?

(screenshot)

thank you,
Igor.


@CharliePoole commented on Sun Aug 14 2016

Hi @constructor-igor - I've been meaning to poke you to see what you are up to. :-)

As you probably figured out, all the code is in Console's TestEventHandler and in NUnitLite's TestRunner. There are tests for TestEventHandler but not for TestRunner. :-(

I would first off fix the problem with All, which is supposed to be issued at the start of each test, not at the end. I think I broke this when I was fixing something else.

Once All is correct, you've got Before, since it's intended to work the same way.

After is the new thing, which is supposed to show status of the test. For that, I would use a combination of the result and label attributes to display a set of values like...

  • Passed
  • Failed
  • Error
  • Invalid
  • Ignored
  • Explicit
  • Skipped
  • Inconclusive

In NUnitLite, you'll have the ResultState available to use for equivalent logic.

@constructor-igor commented on Mon Aug 15 2016

I am going to start with "All".
But right now I don't understand the expected behavior of "All", so I'll contact you once I've investigated the code.


@CharliePoole commented on Mon Aug 15 2016

You might try this sequence - a bit different order from my first suggestion:

  • Change "All" in existing code everywhere to "After" because All is now incorrectly running after the test.
  • Add status info to After
  • Add "Before" to run before the test
  • Add back "All" as a synonym for Before

@constructor-igor commented on Mon Aug 15 2016

I tried to build nunit with "build.cmd":
(screenshot)

VS2013:
(screenshot)


@CharliePoole commented on Mon Aug 15 2016

Are you using the latest master? Can you build in the IDE?


@constructor-igor commented on Mon Aug 15 2016

I cloned today and tried to build via build.cmd and IDE VS2013: same errors.


@CharliePoole commented on Mon Aug 15 2016

Using VS 2015?


@rprouse commented on Mon Aug 15 2016

I just got latest from master and ran .\build.cmd and it builds fine for me. Maybe do a full clean including deleting your packages folder and rebuild?


@constructor-igor commented on Mon Aug 15 2016

I could build the nunit solution on another computer.
I tried building on my computer again (downloaded master to a new folder) and still see the error.

I investigated and found what are probably the same issues.

I am continuing my investigation.

ProcessRunner.RunTestsAsync doesn't work

The AsyncTestEngineResult returned from RunTestsAsync never shows as completed. This error doesn't actually affect how we currently work, since we don't use the RunAsync method in the console runner. However, we may use it in the future.

Support multiple installed engines

@CharliePoole commented on Sat Dec 12 2015

This may end up being a non-issue, if our existing code takes care of the job, but I have a suspicion it doesn't. Here's what we need to verify and/or implement:

  • Multiple engines can be present in well-known locations or pointed to from the registry or settings.
  • All such engines should be examined by the EngineActivator when deciding which engine to use.
  • Each engine may potentially have its own set of extensions.
  • Each engine may also point to a set of shared extensions.
  • In addition to such "findable" engines, any runner may be packaged with a private engine. Such an engine will not be available to other runners.
  • The engine that is found and used by a runner should be easily identifiable by the user, in order to facilitate bug reporting and debugging.

While this is not a user-facing feature, being able to do the above is a prerequisite for other features we want to implement.


@CharliePoole commented on Mon Jun 13 2016

Postponing this... it's important but not urgent.

Allow Extensions on Extensions

@CharliePoole commented on Wed Aug 26 2015

Currently, it's possible to extend the engine but extensions may not have extensions, which was one of the capabilities provided by Mono.Addins. Such a capability would be generally useful to users and would most likely be used by us.

For example, we might want to make the ProjectService an extension, allowing us to deploy the engine either with or without that capability. [We now do that by using a special build, the core engine.]

However, since the ProjectService has extensions itself, it cannot yet become an extension.

Implementation would be identical to the existing identification of extension points. The extension assembly would be examined for assembly-level ExtensionPointAttributes. The key issue is to ensure that assemblies are examined in the proper order. This may require introduction of an attribute to indicate dependencies.

Make Timeouts work without using an extra thread

@CharliePoole commented on Sat Mar 19 2016

PR #1357 reduces the number of supplementary threads created by TestWorkers. One place where we still create a thread is if there is a timeout. For some users who set a default timeout, this may mean every test case.

In order to cancel a running test that is using the actual worker thread, we need to force that worker to stop. Then we either need a way to restart it on a different thread or dispose of it and create a new worker. One general way to do this could be to allow TestWorkers to be added and removed as the tests are running, without affecting the tests themselves except possibly for the one test being cancelled. That's the approach I will try initially.

RemotingException in NUnit.Engine.Runners.ProcessRunner.UnloadPackage

@vitaliusvs commented on Mon Jun 27 2016

NUnit version 3.4.0
mono 4.4.0-182
CentOS 6.8

I'm running a lot of tests from a single assembly and unfortunately I am unable to isolate the small subset of tests that is causing nunit to crash after all tests are complete and nunit is about to finish. Here is the stack trace of the crash:

  • Assertion: should not be reached at sgen-scan-object.h:101

Stacktrace:

Native stacktrace:

    /opt/xstream-mono/bin/mono() [0x49df85]
    /lib64/libpthread.so.0(+0xf7e0) [0x7fa466a187e0]
    /lib64/libc.so.6(gsignal+0x35) [0x7fa4664915e5]
    /lib64/libc.so.6(abort+0x175) [0x7fa466492dc5]
    /opt/xstream-mono/bin/mono() [0x6422f2]
    /opt/xstream-mono/bin/mono() [0x64207c]
    /opt/xstream-mono/bin/mono() [0x64223c]
    /opt/xstream-mono/bin/mono() [0x5ffa95]
    /opt/xstream-mono/bin/mono() [0x5f1c8e]
    /opt/xstream-mono/bin/mono() [0x5f4644]
    /opt/xstream-mono/bin/mono() [0x5f49d4]
    /opt/xstream-mono/bin/mono() [0x5f4b9b]
    /opt/xstream-mono/bin/mono() [0x5f4f23]
    /opt/xstream-mono/bin/mono() [0x5a7c5a]
    /opt/xstream-mono/bin/mono() [0x6398ca]
    /lib64/libpthread.so.0(+0x7aa1) [0x7fa466a10aa1]
    /lib64/libc.so.6(clone+0x6d) [0x7fa466547aad]

Debug info from gdb:

=================================================================
Got a SIGABRT while executing native code. This usually indicates
a fatal error in the mono runtime or one of the native libraries
used by your application.

System.Runtime.Remoting.RemotingException: Tcp transport error.

Server stack trace:
at System.Runtime.Remoting.Channels.Tcp.TcpMessageIO.ReceiveMessageStatus (System.IO.Stream networkStream, System.Byte[] buffer) <0x41be25a0 + 0x000d3> in :0
at System.Runtime.Remoting.Channels.Tcp.TcpClientTransportSink.ProcessMessage (IMessage msg, ITransportHeaders requestHeaders, System.IO.Stream requestStream, ITransportHeaders& responseHeaders, System.IO.Stream& responseStream) <0x41bea010 + 0x001f3> in :0
at System.Runtime.Remoting.Channels.BinaryClientFormatterSink.SyncProcessMessage (IMessage msg) <0x41be94d0 + 0x003d7> in :0

Exception rethrown at [0]:
---> System.Runtime.Remoting.RemotingException: Connection closed
at System.Runtime.Remoting.Channels.Tcp.TcpMessageIO.StreamRead (System.IO.Stream networkStream, System.Byte[] buffer, Int32 count) <0x41be28f0 + 0x0009f> in :0
at System.Runtime.Remoting.Channels.Tcp.TcpMessageIO.ReceiveMessageStatus (System.IO.Stream networkStream, System.Byte[] buffer) <0x41be25a0 + 0x00053> in :0
--- End of inner exception stack trace ---
at (wrapper managed-to-native) System.Object:__icall_wrapper_mono_remoting_wrapper (intptr,intptr)
at (wrapper remoting-invoke) NUnit.Engine.Agents.RemoteTestAgent:Unload ()
at NUnit.Engine.Runners.ProcessRunner.UnloadPackage () <0x41bdcf90 + 0x0005a> in :0


@CharliePoole commented on Mon Jun 27 2016

Are you using the NUnit console runner? Does it produce any output at all?


@vitaliusvs commented on Tue Jun 28 2016

Yes, I'm using nunit3-console.exe. Tests run correctly, but the nunit process crashes when nunit is about to finish. I found the test that was causing the trouble. The test was starting the Quartz scheduler (http://www.quartz-scheduler.org/) in the test fixture setup. I fixed the test by shutting down the Quartz scheduler in the test fixture teardown. However, I still think the nunit process should not crash when the nunit agent fails.


@CharliePoole commented on Tue Jun 28 2016

This may be the same problem as #1628, assuming your scheduler is running on a thread that didn't allow the agent process to unload and terminate. At a minimum, I suspect it's related, so let's wait till 3.4.1 comes out today or tomorrow.


@CharliePoole commented on Mon Aug 29 2016

@vitaliusvs Have you tried this on 3.4.1? Did it solve the problem?


@CharliePoole commented on Mon Aug 29 2016

This should have been moved to the nunit-console repo when we split repos. Doing it now.

Nunit 3.4.1 NUnit.Engine.Runners

@elinato commented on Fri Jul 15 2016

I run into problems, Nunit 3.4.1 sometimes doesn't terminate nunit-agent-x86.exe, using TeamCity

I get this from TeamCity:

[Step 6/6] Unhandled Exception: System.InvalidOperationException: LocalDataStoreSlot storage has been freed.
[10:38:55][Step 6/6] at System.LocalDataStore.SetData(LocalDataStoreSlot slot, Object data)
[10:38:55][Step 6/6] at System.Threading.Thread.SetData(LocalDataStoreSlot slot, Object data)
[10:38:55][Step 6/6] at System.Windows.Interop.ComponentDispatcher.get_CurrentThreadData()
[10:38:55][Step 6/6] at System.Windows.Threading.Dispatcher.WndProcHook(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
[10:38:55][Step 6/6] at MS.Win32.HwndWrapper.WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
[10:38:55][Step 6/6] at MS.Win32.HwndSubclass.DispatcherCallbackOperation(Object o)
[10:38:55][Step 6/6] at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs)
[10:38:55][Step 6/6] at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Int32 numArgs, Delegate catchHandler)
[10:38:55][Step 6/6] at System.Windows.Threading.Dispatcher.LegacyInvokeImpl(DispatcherPriority priority, TimeSpan timeout, Delegate method, Object args, Int32 numArgs)
[10:38:55][Step 6/6] at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam)
[10:38:58][Step 6/6] System.Net.Sockets.SocketException (0x80004005): An existing connection was forcibly closed by the remote host
[10:38:58][Step 6/6]
[10:38:58][Step 6/6] Server stack trace:
[10:38:58][Step 6/6] at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
[10:38:58][Step 6/6] at System.Runtime.Remoting.Channels.SocketStream.Read(Byte[] buffer, Int32 offset, Int32 size)
[10:38:58][Step 6/6] at System.Runtime.Remoting.Channels.SocketHandler.ReadFromSocket(Byte[] buffer, Int32 offset, Int32 count)
[10:38:58][Step 6/6] at System.Runtime.Remoting.Channels.SocketHandler.Read(Byte[] buffer, Int32 offset, Int32 count)
[10:38:58][Step 6/6] at System.Runtime.Remoting.Channels.SocketHandler.ReadAndMatchFourBytes(Byte[] buffer)
[10:38:58][Step 6/6] at System.Runtime.Remoting.Channels.Tcp.TcpSocketHandler.ReadAndMatchPreamble()
[10:38:58][Step 6/6] at System.Runtime.Remoting.Channels.Tcp.TcpSocketHandler.ReadVersionAndOperation(UInt16& operation)
[10:38:58][Step 6/6] at System.Runtime.Remoting.Channels.Tcp.TcpClientSocketHandler.ReadHeaders()
[10:38:58][Step 6/6] at System.Runtime.Remoting.Channels.Tcp.TcpClientTransportSink.ProcessMessage(IMessage msg, ITransportHeaders requestHeaders, Stream requestStream, ITransportHeaders& responseHeaders, Stream& responseStream)
[10:38:58][Step 6/6] at System.Runtime.Remoting.Channels.BinaryClientFormatterSink.SyncProcessMessage(IMessage msg)
[10:38:58][Step 6/6]
[10:38:58][Step 6/6] Exception rethrown at [0]:
[10:38:58][Step 6/6] at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
[10:38:58][Step 6/6] at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
[10:38:58][Step 6/6] at NUnit.Engine.ITestEngineRunner.Unload()
[10:38:58][Step 6/6] at NUnit.Engine.Runners.ProcessRunner.UnloadPackage()
[10:38:58][Step 6/6] at NUnit.Engine.Runners.AbstractTestRunner.Unload()
[10:38:58][Step 6/6] at NUnit.Engine.Runners.AggregatingTestRunner.UnloadPackage()
[10:38:58][Step 6/6] at NUnit.Engine.Runners.AbstractTestRunner.Unload()
[10:38:58][Step 6/6] at NUnit.Engine.Runners.MasterTestRunner.UnloadPackage()
[10:38:58][Step 6/6] at NUnit.Engine.Runners.AbstractTestRunner.Dispose(Boolean disposing)
[10:38:58][Step 6/6] at NUnit.Engine.Runners.MasterTestRunner.Dispose(Boolean disposing)
[10:38:58][Step 6/6] at NUnit.Engine.Runners.AbstractTestRunner.Dispose()
[10:38:58][Step 6/6] at NUnit.ConsoleRunner.ConsoleRunner.RunTests(TestPackage package, TestFilter filter)
[10:38:58][Step 6/6] at NUnit.ConsoleRunner.Program.Main(String[] args)
[10:38:58][Step 6/6] Process exited with code -100
[10:38:58][Step 6/6] Step Run tests (NUnit) failed


@CharliePoole commented on Fri Jul 15 2016

Based on the stacktrace, it does not seem to be the case that the agent process fails to terminate. In fact there is a SocketException with the message "An existing connection was forcibly closed by the remote host" which would usually indicate that the agent had already terminated.

That's a normal situation after a run has terminated, so I'm uncertain why it causes a problem with TeamCity. Are you absolutely sure that TC is using NUnit 3.4.1?


@elinato commented on Sun Jul 17 2016

In TeamCity I use this command line: packages\NUnit.ConsoleRunner.3.4.1\tools\nunit3-console.exe

(screenshot)

Can we make domain=None work without copying files?

@jtattermusch commented on Mon Apr 11 2016

When using https://www.nuget.org/packages/NUnit.ConsoleRunner/, nunit3-console.exe fails because it is not able to locate nunit.framework.dll. That file is usually sitting in the same directory as the assembly containing the tests, but nunit3-console doesn't make an attempt to load it. Alternatively, nunit.framework.dll could be part of the NUnit.ConsoleRunner package directly.


@CharliePoole commented on Mon Apr 11 2016

This is not something anyone else has observed, so it's probably due to some unique situation in your environment. Can you provide details of how to reproduce it? You should include info about the command line you are using and the versions of both the runner and the framework. Is the framework also installed using NuGet? What error message do you get?

FYI, the framework is entirely separate from the console runner and will eventually be separated into its own project. The runner and engine do not reference the framework at all and can, in fact, run tests written with any framework for which a driver is written. That's how we are able to run NUnit V2 tests, which use an entirely different framework, albeit with the same file name. So making the framework part of the NuGet package would be a step backward to the approach we used in NUnit V2.


@jtattermusch commented on Mon Apr 11 2016

It seems that what's causing trouble is the --domain=None argument. When specified, it is impossible to use nunit3-console.exe without manually copying files around.

To reproduce:

1.) Install the NUnit.ConsoleRunner (version 3.2.0) NuGet package into your solution and the NUnit NuGet package into your test project
2.) Build your test project
3.) From the command line, try to run the tests using NUnit.ConsoleRunner

packages/NUnit.ConsoleRunner.3.2.0/tools/nunit3-console.exe --labels=All --noresult --workers=1 --domain=None TestProject/bin/Debug/TestProject.dll

That results in:
Could not load file or assembly 'nunit.framework' or one of its dependencies. The system cannot find the file specified.


@CharliePoole commented on Mon Apr 11 2016

That's how domain=none has always worked, although it could be we neglected to bring over any documentation about it when we went to the new 3.0 documentation wiki.

NUnit creates a domain with the appbase set to the location of an assembly it is running, which is what enables it to find things. In certain extremely rare cases, people are testing features that can only run in a primary domain, so they use domain=None, copying all needed files, including NUnit components into a common directory.
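Concretely, the copying Charlie describes might look something like the following sketch. The directory name rundir is made up, and the package path simply mirrors the one used earlier in this thread; adjust both to your layout.

```shell
# Copy the test assembly, its dependencies (including nunit.framework.dll) and
# the console runner into one common directory, then run with --domain=None
# from inside that directory so everything resolves from the same appbase.
mkdir rundir
cp TestProject/bin/Debug/*.dll rundir/
cp -r packages/NUnit.ConsoleRunner.3.2.0/tools/* rundir/
cd rundir
./nunit3-console.exe --domain=None TestProject.dll
```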

Do you have a good reason to be using domain=None?

You should be aware that the default settings for ProcessModel and DomainUsage are designed to give the least trouble. That wasn't true before 3.0.


@jtattermusch commented on Mon Apr 11 2016

Your explanation makes sense. I was probably confused by the fact that there's no documentation about the limitations of domain=None.

I don't strictly need to use domain=None, but I am testing a project that loads a native library under the hood, so if there's a bug in the native library, the entire process might crash. I am therefore trying to go as lightweight as possible in terms of the test harness to prevent unexpected behavior. Ideally, I'd like to use both --inprocess and domain=None (that's why I originally started to experiment with NUnitLite).


@CharliePoole commented on Mon Apr 11 2016

In some ways, a separate process might give you better isolation. That would be especially true if you end up using the GUI, once it is released. Then crashing the separate process wouldn't bring down the GUI itself.

However, NUnitLite does give the lightest weight approach. Since the framework and NUnitLite itself are referenced from your tests, everything is in the same directory too.


@CharliePoole commented on Mon Apr 11 2016

Added a documentation issue and made this one an idea so we can investigate improving how domain=None actually works. Retitled the issue to match.


@CharliePoole commented on Mon Aug 15 2016

Issue moved to nunit/dotnet-test-nunit #71 via ZenHub

NUnit Engine Tests fail if not run from test directory

@rprouse commented on Mon Oct 26 2015

Run the engine tests from the root of the solution with the following command line,

.\bin\Debug\nunit-console.exe .\bin\Debug\nunit.engine.tests.dll

5 tests fail. They pass if run from the bin\Debug directory. Probably an easy fix, but I think it is the tests themselves that are broken, not the engine itself. We should confirm that, then maybe move this out to 3.2?
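Until that's fixed, the workaround implied above is simply to run from the output directory (same command, Windows cmd syntax):

```shell
cd bin\Debug
nunit-console.exe nunit.engine.tests.dll
```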

The failures are:

1) Error : NUnit.Engine.Services.Tests.DefaultTestRunnerFactoryTests.CorrectRunnerIsUsed("EngineTests.nunit",null,NUnit.Engine.Runners.ProcessRunner)
System.IO.FileNotFoundException : Could not find file 'C:\src\github\nunit\EngineTests.nunit'.
   at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
   at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy)
   at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options, String msgPath, Boolean bFromProxy)
   at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize)
   at System.Xml.XmlDownloadManager.GetStream(Uri uri, ICredentials credentials)
   at System.Xml.XmlUrlResolver.GetEntity(Uri absoluteUri, String role, Type ofObjectToReturn)
   at System.Xml.XmlTextReaderImpl.OpenUrlDelegate(Object xmlResolver)
   at System.Threading.CompressedStack.runTryCode(Object userData)
   at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
   at System.Threading.CompressedStack.Run(CompressedStack compressedStack, ContextCallback callback, Object state)
   at System.Xml.XmlTextReaderImpl.OpenUrl()
   at System.Xml.XmlTextReaderImpl.Read()
   at System.Xml.XmlLoader.Load(XmlDocument doc, XmlReader reader, Boolean preserveWhitespace)
   at System.Xml.XmlDocument.Load(XmlReader reader)
   at System.Xml.XmlDocument.Load(String filename)
   at NUnit.Engine.Services.ProjectLoaders.NUnitProject.Load(String filename) in C:\src\github\nunit\src\NUnitEngine\Addins\nunit-project-loader\NUnitProject.cs:line 127
   at NUnit.Engine.Services.ProjectLoaders.NUnitProjectLoader.LoadFrom(String path) in C:\src\github\nunit\src\NUnitEngine\Addins\nunit-project-loader\NUnitProjectLoader.cs:line 45
   at NUnit.Engine.Services.ProjectService.LoadFrom(String path) in C:\src\github\nunit\src\NUnitEngine\nunit.engine\Services\ProjectService.cs:line 131
   at NUnit.Engine.Services.ProjectService.ExpandProjectPackage(TestPackage package) in C:\src\github\nunit\src\NUnitEngine\nunit.engine\Services\ProjectService.cs:line 67
   at NUnit.Engine.Services.DefaultTestRunnerFactory.MakeTestRunner(TestPackage package) in C:\src\github\nunit\src\NUnitEngine\nunit.engine\Services\DefaultTestRunnerFactory.cs:line 93
   at NUnit.Engine.Services.Tests.DefaultTestRunnerFactoryTests.CorrectRunnerIsUsed(String files, String processModel, Type expectedType) in C:\src\github\nunit\src\NUnitEngine\nunit.engine.tests\Services\DefaultTestRunnerFactoryTests.cs:line 94

2) Error : NUnit.Engine.Services.Tests.DefaultTestRunnerFactoryTests.CorrectRunnerIsUsed("EngineTests.nunit","Single",NUnit.Engine.Runners.TestDomainRunner)
System.IO.FileNotFoundException : Could not find file 'C:\src\github\nunit\EngineTests.nunit'.
   <SNIP>
   at NUnit.Engine.Services.Tests.DefaultTestRunnerFactoryTests.CorrectRunnerIsUsed(String files, String processModel, Type expectedType) in C:\src\github\nunit\src\NUnitEngine\nunit.engine.tests\Services\DefaultTestRunnerFactoryTests.cs:line 94

3) Error : NUnit.Engine.Services.Tests.DefaultTestRunnerFactoryTests.CorrectRunnerIsUsed("EngineTests.nunit","Separate",NUnit.Engine.Runners.ProcessRunner)
System.IO.FileNotFoundException : Could not find file 'C:\src\github\nunit\EngineTests.nunit'.
   <SNIP>
C:\src\github\nunit\src\NUnitEngine\nunit.engine\Services\DefaultTestRunnerFactory.cs:line 93
   at NUnit.Engine.Services.Tests.DefaultTestRunnerFactoryTests.CorrectRunnerIsUsed(String files, String processModel, Type expectedType) in C:\src\github\nunit\src\NUnitEngine\nunit.engine.tests\Services\DefaultTestRunnerFactoryTests.cs:line 94

4) Error : NUnit.Engine.Services.Tests.DefaultTestRunnerFactoryTests.CorrectRunnerIsUsed("EngineTests.nunit","Multiple",NUnit.Engine.Runners.MultipleTestProcessRunner)
System.IO.FileNotFoundException : Could not find file 'C:\src\github\nunit\EngineTests.nunit'.
   <SNIP>
C:\src\github\nunit\src\NUnitEngine\nunit.engine\Services\DefaultTestRunnerFactory.cs:line 93
   at NUnit.Engine.Services.Tests.DefaultTestRunnerFactoryTests.CorrectRunnerIsUsed(String files, String processModel, Type expectedType) in C:\src\github\nunit\src\NUnitEngine\nunit.engine.tests\Services\DefaultTestRunnerFactoryTests.cs:line 94

5) Error : NUnit.Engine.Services.Tests.ResultServiceTests.CanGetWriter("user",System.Object[])
System.ArgumentException : Unable to load transform TextSummary.xslt
   at NUnit.Engine.Services.XmlTransformResultWriter..ctor(Object[] args) in C:\src\github\nunit\src\NUnitEngine\nunit.engine\Services\ResultWriters\XmlTransformResultWriter.cs:line 56
   at NUnit.Engine.Services.ResultService.GetResultWriter(String format, Object[] args) in C:\src\github\nunit\src\NUnitEngine\nunit.engine\Services\ResultService.cs:line 74
   at NUnit.Engine.Services.Tests.ResultServiceTests.CanGetWriter(String format, Object[] args) in C:\src\github\nunit\src\NUnitEngine\nunit.engine.tests\Services\ResultServiceTests.cs:line 65

@CharliePoole commented on Mon Oct 26 2015

I agree with your assessment.


@CharliePoole commented on Sat Nov 14 2015

@rprouse I think this is fixed with RC3. Can you confirm?


@rprouse commented on Sat Nov 14 2015

Tested and this is not fixed with RC 3.


@CharliePoole commented on Sat Nov 14 2015

OK. They seemed fixed for me, but I may not have run them in exactly the same way. We have a number of tests that depend on finding files in the right directory, but they should be fixable.
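The failing tests resolve data files like EngineTests.nunit by relative path, which binds them to the process working directory. A minimal sketch of the fix pattern (illustrated here in Python; the data_file helper and the sample directory are inventions for illustration, not NUnit code) anchors the lookup to a known directory instead:

```python
import os

def data_file(name, anchor_dir):
    """Resolve a test data file against a fixed anchor directory,
    never the process working directory."""
    return os.path.join(anchor_dir, name)

# The failing tests effectively do this, which depends on where the
# runner was launched from:
fragile = os.path.join(os.getcwd(), "EngineTests.nunit")

# Anchoring to a known directory (in C#, something like the directory
# of Assembly.GetExecutingAssembly().Location) is stable from any CWD:
assembly_dir = r"C:\src\github\nunit\bin\Debug"  # illustrative value
stable = data_file("EngineTests.nunit", assembly_dir)
```

With this pattern, running the tests from the solution root or from bin\Debug produces the same lookup path.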

Improved location of extensions

@Chraneco commented on Mon Aug 15 2016

Hi!

We recently updated to NUnit 3.4.1 which unfortunately broke the integration with TeamCity. After some research I found out that this is because the TeamCity integration was moved to a separate NuGet package.

In our build pipeline we copy the contents of packages\NUnit.ConsoleRunner.3*\tools as part of a build artifact to a separate location, leaving behind the contents of all other packages (including NUnit.Extension.TeamCityEventListener.*).
I was hoping that it would be enough to copy the "teamcity-event-listener" DLL into the folder next to "nunit3-console.exe", but it didn't help. That is because the NUnit engine only looks in the following hard-coded directories for extensions (listed in the nunit.engine.addins file):

../../NUnit.Extension.*/**/tools/     # nuget v2 layout
../../../NUnit.Extension.*/**/tools/  # nuget v3 layout

Unfortunately, this directory structure does not exist where we run our tests.

Making the following adjustment solves the problem for us

../../NUnit.Extension.*/**/tools/     # nuget v2 layout
../../../NUnit.Extension.*/**/tools/  # nuget v3 layout
teamcity-event-listener.dll

because we now copy the DLL into the directory where the console runner is put. However, we don't want to make this change every time we update NUnit. Is there a more generic solution for this? For example, the console runner could always search its own location for extension DLLs.
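The wildcard entries above are expanded relative to the engine's own directory, so they only resolve when the surrounding NuGet packages layout actually exists. A rough sketch of that expansion (in Python; find_extension_dirs is an illustration, not the engine's actual code):

```python
import glob
import os

def find_extension_dirs(engine_dir, patterns):
    """Expand .addins-style wildcard entries relative to the engine directory."""
    found = []
    for pattern in patterns:
        # Trailing '/' restricts matches to directories; '**' allows
        # intermediate directories such as a framework folder under tools.
        found += glob.glob(os.path.join(engine_dir, pattern), recursive=True)
    return found

# The stock patterns assume the packages directory sits two or three
# levels above the engine; in a copied-out build artifact it does not,
# so nothing is found and the extension never loads.
stock_patterns = [
    "../../NUnit.Extension.*/**/tools/",     # nuget v2 layout
    "../../../NUnit.Extension.*/**/tools/",  # nuget v3 layout
]
```

This is why copying only the ConsoleRunner tools folder leaves the extension undiscoverable unless the .addins file is edited.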


@NikolayPianikov commented on Mon Aug 15 2016

@Chraneco you could restore NUnit using this command line nuget.exe install NUnit.Runners -Version 3.4.1 -o packages and copy all installed packages


@Chraneco commented on Mon Aug 15 2016

@NikolayPianikov Thanks for your comment. At the point when we execute the unit tests we are completely outside of a Visual Studio project. The build artifacts contain only the necessary files and in a different directory structure. Restoring NuGet packages there is not possible and also not wanted.


@NikolayPianikov commented on Mon Aug 15 2016

Is it possible to update the .addins file in the script where you make your own NUnit tool directory?


@Chraneco commented on Mon Aug 15 2016

All the files and directories are put in place by TeamCity itself via Artifact dependencies.


@CharliePoole commented on Mon Aug 15 2016

@Chraneco Yes... we are discovering that the existing setup for locating extensions is not sufficient.

The main problem is that extensions are looked for relative to the directory containing the engine. The paths in the .addins file are tailored for an engine installed as a NuGet package to find extensions that are also installed as NuGet packages. In a case like yours, it would be handy to be able to locate addins relative to the VS project or the test assembly itself.

Since we don't already have an issue on this general topic, I changed the title of this issue accordingly. Things we may consider:

  1. Adding additional roots where we look for .addins files
    • test assembly directory
    • global per-user location
    • global per-machine location
    • parent directory of the engine directory, recursively
  2. Add an option to the command-line to specify extension paths to search
  3. Convention-based approach, by which .addins files reference one another
    • Raises the question of which should be the true root

Running all tests in a solution needs more logging to console.

When running all tests using nunit3-console on a *.sln file, it filters for projects that reference nunit.framework but does not output to the console which projects were detected.

When reviewing build logs as part of CI, it is impossible to know whether the tests were run correctly. The actual XML result is irrelevant in this case. It makes reviewing a build log impossible.

Cheers,
Calin

NUnit3 Runner skips end of line char in regular expression

I have 5 tests with the following full names:
NUnit3_Suites.MainSuite.Category1.Categoty1Test.category1_test_1
NUnit3_Suites.MainSuite.Category1.Categoty1Test.category1_test_2
NUnit3_Suites.MainSuite.Category1.TestSuite1.TestSuite1Test.testsuite1_test_1
NUnit3_Suites.MainSuite.Category1.TestSuite1.TestSuite1Test.testsuite1_test_2
NUnit3_Suites.MainSuite.Category1.TestSuite1.TestSuite1Test.testsuite1_test_3

I am trying to run tests by the following command:
nunit3-console.exe NUnit3-Suites.dll --where "test=~NUnit3_Suites.MainSuite.Category1.\w+.\w+$"

I expect only the first two tests to be run. However, it runs all of them: http://prntscr.com/cgvpjk
I verified the regular expression with the online tool https://regex101.com/ and found only two matches: http://prntscr.com/cgvruy
I also verified with C# code using Regex.IsMatch, which is what NUnit is said to use internally for regular expressions.
The result: again, only two matches found: http://prntscr.com/cgvw2o

It seems NUnit skips the end-of-line ($) symbol in the regular expression. Could you please check this in the NUnit source code?
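For comparison, a standard regex engine honours the anchor. The following Python check (equivalent to the Regex.IsMatch verification mentioned above) matches only the first two names:

```python
import re

names = [
    "NUnit3_Suites.MainSuite.Category1.Categoty1Test.category1_test_1",
    "NUnit3_Suites.MainSuite.Category1.Categoty1Test.category1_test_2",
    "NUnit3_Suites.MainSuite.Category1.TestSuite1.TestSuite1Test.testsuite1_test_1",
    "NUnit3_Suites.MainSuite.Category1.TestSuite1.TestSuite1Test.testsuite1_test_2",
    "NUnit3_Suites.MainSuite.Category1.TestSuite1.TestSuite1Test.testsuite1_test_3",
]

pattern = r"NUnit3_Suites.MainSuite.Category1.\w+.\w+$"
matches = [n for n in names if re.search(pattern, n)]
# Only the two Categoty1Test names match: \w+ cannot cross the extra
# '.' segments in the deeper names once $ pins the match to the end.
```

If NUnit's filter matches all five names, the $ anchor is being dropped or neutralised somewhere in the --where handling.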

Running a test for multiple iterations starts multiple nunit-agent processes

@levimm commented on Mon Aug 22 2016

Hi Guys,

I'm using this pattern to run a test for multiple iterations:

string[] arg = { @"someDll.dll", "--dispose-runners" };
for (int i = 0; i < 3; i++)
    ConsoleRunner.Program.Main(arg);

In the second iteration, two nunit-agent.exe processes are running, and in the third iteration, three agents are running.
I notice the dll is always added to InputFiles in CommandLineOptions, so the created testPackage contains multiple subpackages.
Is this intended?

Cheers
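The symptom (one extra agent per iteration) is consistent with the input file list accumulating across calls to Main. A hypothetical Python reduction of that failure mode (not the actual CommandLineOptions code, just the shared-state bug pattern it suggests):

```python
class Options:
    input_files = []  # one list shared by every instance: the bug pattern

    def __init__(self, args):
        for arg in args:
            if not arg.startswith("--"):
                # Re-adds someDll.dll on every run instead of starting fresh.
                self.input_files.append(arg)

# Each "iteration" adds the same dll again, mirroring the growing
# set of sub-packages (and agents) reported above.
for _ in range(3):
    opts = Options(["someDll.dll", "--dispose-runners"])
```

The fix, in any language, is to give each invocation its own freshly constructed options/package state.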


@CharliePoole commented on Tue Aug 23 2016

@levimm Moving this to our new nunit-console repository. Don't feel bad, it is brand new. You're probably just the first to get caught this way.

Mono.Cecil no longer supports .NET 2.0

@rprouse commented on Thu Dec 31 2015

The mono.cecil project dropped support for .NET 2.0 back in May, jbevain/cecil#220

Since our engine is compiled for .NET 2.0, we cannot update Mono.Cecil once they release a new version. Currently, their minimum framework version is .NET 3.5.

I was looking for a newer release of mono.cecil that has a PCL version. It isn't released yet, but it is needed for a portable engine.

Options,

  1. Move the console/engine to .NET 3.5
  2. Don't upgrade
  3. Create our own NuGet package that contains the older 2.0 version of the assembly and the newer PCL version 👎
  4. ???

Microsoft's end of support for .NET 2.0 is in April.

This is blocking #677 #1138 and to a lesser extent #362


@CharliePoole commented on Thu Dec 31 2015

Potentially a big deal! I'll start with the list of blocked issues:
#677 As it says in the description, it's not clear if this will be a true "engine". In fact, this could be the defining issue where we decide not to go the engine route.
#1138 I'm not sure there is a problem here since this refers (in my mind anyway) to an engine running on the desktop. What we need to find out is whether an engine running under .NET 2.0/3.5 is able to analyze assemblies built for other targets. I don't see why not, since they are not actually loaded. Consider that we are already analyzing .NET 4.5 assemblies. Of course, this needs to be tested on a VM that has only .NET 2.0 or 3.5 installed.
#362 I don't understand why you say this would be blocked.


@CharliePoole commented on Thu Dec 31 2015

It's not clear to me if 2.0 support was actually dropped rather than just talked about. PR jbevain/cecil/#220 looks like it was closed without merging. Of course, there is a good chance it will be at some point.


@rprouse commented on Thu Dec 31 2015

The current release on NuGet has 2.0, but no PCL. The nuspec currently checked into master no longer has 2.0 and includes PCL, so the next release will drop support.


@CharliePoole commented on Thu Dec 31 2015

I think we need to define our own goals in "supporting" .NET 2.0.

I have always thought of two different kinds of support for runtimes.

  1. We will run tests under that runtime.
  2. Our runners will execute under that runtime.

Currently, our runners all execute under 2.0 and higher and we can run tests under 2.0 and higher, which makes it seem like 1 and 2 are the same.

However, in NUnit 2.6.4 our runners ran under .NET 2.0 and higher and supported test execution under .NET 1.1, which helps remind us that there is a difference.

I considered requiring .NET 3.5 for the 3.0 console runner, while still supporting 2.0 for test execution. I didn't do it for a few reasons...

  1. It seemed better to support folks who test apps on machines with .NET 2.0 only.
  2. There is almost no difference between 2.0 and 3.5.
  3. Our codebase was already designed for 2.0.

In NUnit V2, supporting execution under a lower-level runtime than the one we use for NUnit itself required creation of separate builds for some components.

In 3.0, we already have a version of the framework for .NET 2.0. But if we were forced to use 3.5, we would have to create a separate 2.0 version, not using Cecil, along with a 2.0 agent to run in a separate process. Alternatively, we would execute 2.0 tests under 3.5.

None of this seems undoable, but I think we should first decide what we want to support and what we mean by support. I don't think MS's dropping of support (which means an entirely different thing for them) has much to do with us.


@CharliePoole commented on Thu Dec 31 2015

When the new release comes out, we should make sure we don't upgrade until this is resolved.


NUnit3 output in nunit2 result format reports the test dll as ignored if one or more child tests are marked as ignored

@harrisonmeister commented on Fri Sep 02 2016

Hi,

I'm not sure if this is by design or an issue, so please feel free to close if by design.

I am running a test project using the NUnit 3 console.exe, and I receive the following output:

<?xml version="1.0" encoding="utf-8" standalone="no"?>
<!--This file represents the results of running a test suite-->
<test-results name="D:\PATH_TO_LIBRARY\Some.Library.dll" total="2" errors="0" failures="0" not-run="1" inconclusive="0" ignored="1" skipped="0" invalid="0" date="2016-09-02" time="13:20:50">
  <environment nunit-version="3.4.1.0" clr-version="4.0.30319.42000" os-version="Microsoft Windows NT 10.0.10586.0" platform="Win32NT" cwd="D:\Git\Source" machine-name="REDACTED" user="REDACTED" user-domain="REDACTED" />
  <culture-info current-culture="en-GB" current-uiculture="en-GB" />
  <test-suite type="Assembly" name="D:\PATH_TO_LIBRARY\Some.Library.dll" executed="False" result="Ignored">
    <properties>
      <property name="_PID" value="19472" />
      <property name="_APPDOMAIN" value="test-domain-" />
    </properties>
    <reason>
      <message><![CDATA[One or more child tests were ignored]]></message>
    </reason>
    <results>
      <test-suite type="TestSuite" name="A" executed="False" result="Ignored">
        <reason>
          <message><![CDATA[One or more child tests were ignored]]></message>
        </reason>
        <results>
          <test-suite type="TestSuite" name="B" executed="False" result="Ignored">
            <reason>
              <message><![CDATA[One or more child tests were ignored]]></message>
            </reason>
            <results>
              <test-suite type="TestSuite" name="C" executed="False" result="Ignored">
                <reason>
                  <message><![CDATA[One or more child tests were ignored]]></message>
                </reason>
                <results>
                  <test-suite type="TestFixture" name="D" executed="False" result="Ignored">
                    <reason>
                      <message><![CDATA[One or more child tests were ignored]]></message>
                    </reason>
                    <results>
                      <test-case name="A.B.C.D.Ignored_Test" executed="False" result="Ignored">
                        <properties>
                          <property name="_SKIPREASON" value="Ignored for some reason" />
                        </properties>
                        <reason>
                          <message><![CDATA[Ignored for some reason]]></message>
                        </reason>
                      </test-case>
                      <test-case name="A.B.C.D.Test_That_Works" executed="True" result="Success" success="True" time="0.027" asserts="0">
                        <reason>
                          <message><![CDATA[]]></message>
                        </reason>
                      </test-case>
                    </results>
                  </test-suite>
                </results>
              </test-suite>
            </results>
          </test-suite>
        </results>
      </test-suite>
    </results>
  </test-suite>
</test-results>

This marks the library test result as Ignored and not executed, even though one of the tests has run.

Specifically, I have code which merges test result summaries (based on the nunit2 format). As NUnit 2 (2.6.4) always added "time" and "asserts" as attributes, I'm trying to work out whether the result I am seeing in NUnit 3 is expected or not.

[TestFixture]
public class ApisEstaCalculatorTests
{
    [SetUp]
    public void SetUp()
    {
    }

    [Test]
    public void Test_That_Works()
    {
        Assert.Pass();
    }

    [Test, Ignore("Ignored for some reason")]
    public void Ignored_Test()
    {
    }
}

Above is the sample code, and here is the associated console command I am running:
D:\packages\NUnit.ConsoleRunner\tools\nunit3-console.exe "--noheader" "--where=!(cat == 'Integration' || cat == 'IntegrationTest' || cat == 'IntegrationsTest' || cat == 'IntegrationTests' || cat == 'IntegrationsTests' || cat == 'Integration Test' || cat == 'Integration Tests' || cat == 'Integrations Tests' || cat == 'Approval Tests' || cat == 'AcceptanceTest' || cat == 'PerformanceTest')" "--labels=Off" "--timeout=900000" "--stoponerror" "--result=D:\TestResult-unit-64-0.xml;format=nunit2" "D:\PATH_TO_LIBRARY\Some.Library.dll"

My main issue is that
<test-suite type="Assembly" name="D:\PATH_TO_LIBRARY\Some.Library.dll" executed="False" result="Ignored"> doesn't contain the time or asserts attributes any more. Running against the 2.6.4 console runner, it does.


@CharliePoole commented on Fri Sep 02 2016

It does seem like a bug to me. I'm moving it to the nunit-console project because it's the NUnit Engine, not the Framework, that translates nunit3 output to nunit2 format.

The XML produced by this translation is not going to be identical. We ensure that the format is correct, so that existing programs that process nunit2 XML files will be able to run. The semantics can obviously vary since NUnit 3 works differently from NUnit 2.

In this case, the biggest problem I see is that your assembly is shown as not having been executed. I suspect that's why there is no time and no assertion count - those attributes don't make sense if the assembly was never executed! But of course it was executed. :-)
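The behaviour is consistent with an NUnit 2-style rollup in which any Ignored child marks its parent Ignored and not executed, which in turn drops the time and asserts attributes. A small sketch of that assumed rule (inferred from the report above, not taken from the engine source):

```python
def nunit2_rollup(child_results):
    """Roll child results up to the parent suite, NUnit 2 style (assumed):
    any failure dominates, then any ignore, else success."""
    if any(r == "Failure" for r in child_results):
        return "Failure"
    if any(r == "Ignored" for r in child_results):
        return "Ignored"
    return "Success"

# One passing test plus one ignored test marks the whole suite
# Ignored, all the way up to the assembly node in the report.
result = nunit2_rollup(["Success", "Ignored"])
```

Under that rule the assembly node inherits Ignored even though a test really ran, which is exactly the mismatch described here.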

In nunit3-console you cannot pass parameters containing ';' because they always get split

@SilverXXX commented on Fri Jul 22 2016

When running some tests with nunit3-console.exe, there is no way to pass a value containing a ';' with --param.

It seems from CommandLineOptions.cs that you can't escape them in any way (it's not documented, so I read the source directly).
A regex that captures only unescaped characters would solve the problem (I'm not a regex expert, so I can't suggest an exact one).
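One possible shape for such a fix, sketched in Python (the backslash escaping scheme is hypothetical; nunit3-console does not currently support it): split only on unescaped semicolons, then strip the escapes.

```python
import re

def split_params(value):
    """Split on ';' only where it is not preceded by a backslash,
    then unescape. Hypothetical scheme, not current nunit3 behaviour."""
    parts = re.split(r"(?<!\\);", value)  # negative lookbehind skips \;
    return [p.replace("\\;", ";") for p in parts]

# A connection string survives as a single key=value pair:
parsed = split_params(r"conn=Server=x\;Database=y;mode=fast")
# -> ["conn=Server=x;Database=y", "mode=fast"]
```

The same lookbehind idea translates directly to .NET's Regex.Split, since both engines support `(?<!...)`.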


@rprouse commented on Fri Jul 22 2016

See our docs; there is general information there on quoting for various operating systems. You can also have multiple --param options on the command line, one per variable.

I am closing this as a question, but feel free to add questions; we will see them and try to help.


@ChrisMaddock commented on Fri Jul 22 2016

@rprouse - I think this is a potential 'issue' within NUnit, not command-line usage. It looks like the options will always split on semicolon and then throw if the result isn't in a valid A=B format afterwards. So I don't think there is currently any way to pass in a test parameter containing a semicolon. (If that's meant to be a supported thing to do!)


@SilverXXX commented on Fri Jul 22 2016

Yes @ChrisMaddock, I would have tried to create a pull request to show a possible solution; it actually requires only a one-line change and a sensible regexp, but I'm quite bad at those (a connection string is a simple example of a parameter that is impossible to pass right now).
Edit: and since it's all made of key=value pairs, it doesn't even return an error.


@rprouse commented on Fri Jul 22 2016

Sorry, reading issues on my phone, I missed the detail 😦

Insufficient info on driver reflection exception

@CharliePoole commented on Thu May 19 2016

When the nunit3 driver gets an exception in its calls to CreateObject, it doesn't show the nested exception that actually caused the problem.


@CharliePoole commented on Tue May 24 2016

I created this issue based on an nunit-discuss post showing a TargetInvocationException with no info about the inner exception that actually caused the problem. However, I haven't been able to duplicate the problem. Somewhere in the code, I suspect, is a catch that rethrows in a way that loses the information. We just don't know where it is, so I'm marking this as needing confirmation with a specific example that causes the problem.


@CharliePoole commented on Tue May 24 2016

See #1509 for an example of stack trace with insufficient information.

Add project element to top-level sub-project before merging

This is a remaining issue from PR #52 which is intended to fix #30. As @CharliePoole stated in that PR,

This looks much cleaner! The remaining problem I see has to do with where MakePackageResult is called. You moved it from the individual runners to MasterTestRunner. I like the way that removes duplication, but I'm not sure it works correctly now.

Here is what I think will happen, depending on what's on the command-line:

Arguments                 Outcome
One assembly              No project element, as expected
Multiple assemblies       No project element, as expected
One project               One project element, as expected
Multiple projects         No project elements rather than one per project
Project plus assemblies   No project elements rather than one wrapping just the project assemblies

The problem is that the project element, if any, needs to be added to each top-level sub-project before they are all merged.

That said, this is a rather esoteric use case. I'd be OK with merging this and creating a new issue for the edge cases if @rprouse agrees.

@CharliePoole is there more information you would like to add to this issue?

How to retrieve log for each Unit Test case with Pass/Fail status from Nunit console

Hi,
How do I get the pass or fail status for each test case? When running the command below, it creates an XML file that stores all the information for each test case.
"nunit-console.exe /labels /result:console-test.xml F:\project\test\bin\Release\Application.Test.dll /framework=net-4.5"
It prints to the console like this:

ProcessModel: Default DomainUsage: Single
Execution Runtime: net-3.5
***** Application.Test.TestARule
***** Application.Test.TestConstructor
***** Application.Test..TestEvaluating
Tests run: 3, Errors: 0, Failures: 0, Inconclusive: 0, Time: 0.23005096791134 seconds
Not run: 0, Invalid: 0, Ignored: 0, Skipped: 0

But how do I get the pass or fail status for each test case in stdout, like this:
ProcessModel: Default DomainUsage: Single
Execution Runtime: net-3.5
***** Application.Test.TestARule :passed
***** Application.Test.TestConstructor :passed
***** Application.Test..TestEvaluating :passed
Tests run: 3, Errors: 0, Failures: 0, Inconclusive: 0, Time: 0.23005096791134 seconds
Not run: 0, Invalid: 0, Ignored: 0, Skipped: 0

DomainManagerTests.CanCreateDomainWithApplicationBaseSpecified() fails when not 3 directories deep

@rprouse commented on Tue Dec 01 2015

Found while testing the 3.0.1 release. The 'fix' for this unit test bb2e37b2 doesn't work if the test is not at least 3 directories deep. The code is correct but the test is not, so I am not going to delay the release for this.


@rprouse commented on Tue Dec 01 2015

To elaborate a bit, I ran these tests with the command line .\nunit3-console.exe .\nunit.engine.tests.dll

If the tests are three directories deep, they pass; if not, they fail with one of the errors below.

1) Error : NUnit.Engine.Services.Tests.DomainManagerTests.CanCreateDomainWithApplicationBaseSpecified
System.NullReferenceException : Object reference not set to an instance of an object.
   at NUnit.Engine.Services.Tests.DomainManagerTests.CanCreateDomainWithApplicationBaseSpecified() in D:\src\nunit\nunit\src\NUnitEngine\nunit.engine.tests\Services\DomainManagerTests.cs:line 74

This error occurs if it is one level deep (in d:\tmp), or

1) Failed : NUnit.Engine.Services.Tests.DomainManagerTests.CanCreateDomainWithApplicationBaseSpecified
  PrivateBinPath
  Expected: Path matching "mp\nunit"
  But was:  "tmp\nunit"
at NUnit.Engine.Services.Tests.DomainManagerTests.CanCreateDomainWithApplicationBaseSpecified()

if it is two levels deep (d:\tmp\nunit)


@CharliePoole commented on Tue Dec 01 2015

You know, I knew that was there and figured I would get back to it. It's best to record such things.

Implement Addin model for extensions

@CharliePoole commented on Wed Aug 26 2015

The spec for Engine Addins identifies a model for more complex extensions that uses a Type derived from IAddin to actively create and register extensions, similarly to what we did in NUnit V2 addins.

We should investigate whether this is actually needed and, if so, implement it in a future release. If it isn't needed, we may identify further features that serve the same purpose of allowing the creation of more complex extensions or sets of extensions.

Invalid file type is shown in XML as type="Assembly"

@CharliePoole commented on Mon Dec 28 2015

This doesn't get noticed in the console runner, but the gui uses the type to display information about the node. When I try to open package.config, it lists Type: Assembly for it.

When the engine creates a test-suite node for a non-runnable file, it should only use type="Assembly" if the extension is .dll or .exe. For anything else, it should use some other value, possibly "Unknown".

Split Engine Extensions into separate projects

@CharliePoole commented on Tue Jun 21 2016

It's conceivable that we will want to keep some extensions in the engine project, so the first step is to decide whether to split off all or some of them. Once this is done we can create separate issues.

Currently, we maintain five extensions:

  • nunit-project-loader
  • vs-project-loader
  • nunit-v2-result-writer
  • nunit v2 driver
  • teamcity-event-listener

The first four have been around since the beginning of NUnit 3.0 and represent fairly fundamental capabilities. They could go either way.

The Teamcity event listener is clearly something that should be separate, since it relates to a third-party project. It's probably our highest priority for splitting off.

NOTE: The VS loader is not actually product-specific, since the format is used by a number of IDEs in addition to Visual Studio.

NOTE: I'm calling this high priority with respect to the decision-making. Once that's done we can create separate issues for whatever has to be split off.


@alastairtree commented on Tue Jun 28 2016

It looks like this has been completed for the TeamCity extension and released as stable to NuGet for 3.4? However, the default behaviour of detecting TeamCity no longer seems to work in 3.4.0, nor does the --teamcity console option. Both the docs and the --help console option list "--teamcity" as supported, but the code suggests it no longer is. Could the docs/console help be updated to either support TeamCity automatically as before, or give information on how devs should use the new extension? It seems like this will break lots of builds, mine included.

Thanks for the great library.


@CharliePoole commented on Tue Jun 28 2016

No, this has not yet happened. We did convert the teamcity code, which was previously internal, to an extension but it's still included with NUnit 3.4. Most likely you are running into issue #1623 (fixed in source) or #1626 (still in process). We will do a 3.4.1 release today or tomorrow.

If you are using the msi or one of the nuget packages that bundles common extensions, you have the teamcity extension automatically. If you are installing the NUnit.ConsoleRunner nuget package, which doesn't include extensions, then you will have to install NUnit.Extension.TeamCityEventListener as well. I'll add that info to the docs.


@CharliePoole commented on Tue Jun 28 2016

@alastairtree BTW, what this issue is about is developing framework, console, extensions, etc. separately and releasing updates on different schedules as needed. But we will still bundle collections of the latest versions in various ways for those who want it.


@CharliePoole commented on Tue Jul 19 2016

@rprouse @NikolayPianikov I'm moving ahead with creation of a separate repo for the teamcity extension. This will be a repo under the NUnit organization until we finish all the work. Then we'll decide where it belongs.


@CharliePoole commented on Tue Jul 26 2016

The teamcity extension has been split off into issue #1702 - now ready for merge.

That leaves the four remaining "internal" extensions, built as part of the engine. We need to decide what their immediate and longer-term future should be.

The choices are basically per-extension but we shouldn't do different things for different extensions without a good reason...

  1. Continue to support these as we now do, building them as a part of the engine and distributing them with the engine as "standard" extensions.
  2. Separate the extensions into a different solution within the repo but continue to bundle them in certain packages. Note that they are already built in separate C# projects, with only a reference to the api assembly. This would simply create a greater degree of separation and would also force us to make our nunit tests run without any extensions present.
  3. Separate the extensions (or each extension) into a different repo with its own solution and use the nunit.engine.api package to isolate them from the engine build. While we could do this and still maintain a common release schedule, it doesn't make much sense to do so. If all the standard extensions were in a single repo, then that repo could take responsibility for the packages that bundle extensions with the engine (NUnit.Runners and NUnit.Console).
  4. Possibly in combination with option 3, create a separate project for bundling the deconstructed pieces of NUnit software in various ways, based on the needs of different audiences. I'm pretty high on this as a longer-term goal and I'm considering starting it under my own GitHub account.

Please contribute your ideas here so we can set up specific steps to finish off this issue before splitting the framework and engine/console.


@ChrisMaddock commented on Wed Jul 27 2016

Charlie - can you run through what we'd be hoping to gain by splitting them up?

I think one solution per repo would be preferable when approaching a new project - whether that means we keep everything in one repo, or in many separate repos.


@CharliePoole commented on Wed Jul 27 2016

Chris - good question...

We have had an historical problem with NUnit, which arises because we started out small and grew to be OneBigProject. Things would often work in our NUnit tests, but not work for users. Sometimes, they would only work in the NUnit build itself!

With 3.0, I wanted to resolve this by ensuring that our own tests would run in an environment similar to that of our users. One aspect of this is to keep the various layers separate, both in the development stage and the testing stage.

When you are working in a solution that includes both the engine and its extensions, it's very easy to create something that works when everything is in the same directory, or in known locations, but fails in the wild, when engine and extensions get updated at different times, are installed in different places, etc.

Further reasons for separating are to follow the principle of separating things that will be released at different times and things that will be worked on by different teams.

Here's a shot at showing the benefits of each approach... at least as I see it...

| Benefit | Option 1 | Option 2 | Option 3 | Option 4 |
| --- | --- | --- | --- | --- |
| Separate release cycles | No | No | Yes | Yes |
| Separate teams | No | No | Yes | Yes |
| Reduced coupling | No | Some | Yes | Yes |
| Error reduction | No | No | Yes | Yes |
| Ease of install | Yes | Less | Less | Yes |

@rprouse commented on Wed Jul 27 2016

In the short term, I think option 3 is a reasonable goal leading to a possible longer term goal of option 4.

I would like to see each extension in a separate repository with its own version number. The engine w/ extensions NuGet package can take a dependency on each package with a version >= so it always pulls in updates. We can then only release extensions when there have been bugs reported and we can make fixes without a full NUnit release.
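That ">= version" idea can be illustrated with a hypothetical nuspec dependency fragment (the package ids and versions here are placeholders); in NuGet, a bare version such as 1.0.0 is already interpreted as "1.0.0 or higher":

```xml
<!-- Hypothetical fragment of an "engine with extensions" nuspec.
     A bare version is a floor (>= 1.0.0), so bug-fix releases of an
     extension are pulled in without republishing this package. -->
<dependencies>
  <dependency id="NUnit.Extension.TeamCityEventListener" version="1.0.0" />
  <dependency id="NUnit.Extension.NUnitV2Driver" version="1.0.0" />
</dependencies>
```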

I also think that the extensions are likely fairly stable at this point, so hopefully they won't require much maintenance.

I think we need to pull them into the engine solution somehow though so we can continue to test the engine with extensions present. Integration testing during development if you like. We can either pull in the released NuGet packages, or CI builds.


@CharliePoole commented on Wed Jul 27 2016

I agree.

Many projects on GitHub have a separate integration testing repo - so that's something we could consider in the future as well - maybe even the fairly near-term future.

We could make it the responsibility of the extensions to test themselves with various engines - probably last release and current master. All this can be pulled in by packages. That's what we are settling on for the teamcity extension at any rate. But for the shorter term, I guess we should pull the separate extensions into the master build and have some tests for them. These would have to be integration tests though, not unit tests, because the unit tests do the same thing whether run in the extension project or the nunit project. Our V2 driver tests are an integration test - our only one for the extensions btw.


@ChrisMaddock commented on Wed Jul 27 2016

Thanks for running through it Charlie. Separate repositories sounds sensible.


@CharliePoole commented on Wed Jul 27 2016

Before moving further, I'm going to switch to the teamcity extension, which is now separated, and see if I can get it back into our NUnit.Runners package. The JetBrains guys feel that's important and we'll need a similar solution for other extensions.


@ChrisMaddock commented on Wed Jul 27 2016

Before moving further, I'm going to switch to the teamcity extension, which is now separated, and see if I can get it back into our NUnit.Runners package.

What were you planning with this? Would it make sense to have a single repo for the installer/Runners package, i.e. one place that manages "combined" distributions?


@CharliePoole commented on Wed Jul 27 2016

That's my thinking for "bundles" of things. However, that's more on the long-term side of it. Right now, I'm looking for instant gratification for the teamcity users.

What I'll probably try to do is set up the NUnit.Runners package to pull in the latest release of a set of well-known extensions, using a >= constraint with some minimum version for each. It's not perfect but will do for a while.

XSD for Version 3 output format

@oznetmaster commented on Sun Oct 19 2014

Is there an XSD for the 3.0 output format? If not, should there be one?


@rprouse commented on Sun Oct 19 2014

I only see an XSD for v2 in the source. There probably should be one for 3.0.

@CharliePoole, is this an oversight or were you waiting until the format stabilized?


@CharliePoole commented on Sun Oct 19 2014

Two reasons we don't have it:

  1. I'm waiting for the format to settle.
  2. I really dislike XML, and so tend to not pay enough attention to it.

Only #1 is a valid reason, of course. :-)

I imagined we would do this as a part of resolving #254, but there is no reason we can't develop the XSD earlier and modify it if we make changes. Some people might prefer to review the format through an XSD anyway.

I'm marking this for the first beta milestone, meaning that it needs to be done at least by that point.


@oznetmaster commented on Tue Oct 21 2014

I have a tool which generates an xsd from any xml file.

Given a large enough and comprehensive enough xml file, the generated xsd is pretty good.

If someone can provide a very large version 3 xml output file, I will run it through the tool, and we can see what it produces.


@CharliePoole commented on Tue Oct 21 2014

Hi Neil,

Just run the NUnit tests. Running

```
nunit-console.exe nunit.engine.tests.dll nunit-console.tests.dll net-4.5/nunit.framework.tests.dll
```

will give you a pretty large output file.

Charlie



@oznetmaster commented on Tue Oct 21 2014

The problem is that those tests all run correctly. I will need a large output file that contains as many different elements as possible. Errors, failures, failed assumptions, different site failures.

Perhaps a few contributors might have 3.0 output files with these in them?


@ilya-murzinov commented on Tue Oct 21 2014

@oznetmaster I'm gonna need similar XML too for #294. Please share it with me once you have it. Or I will share it if I get it earlier :)
Thanks!


@oznetmaster commented on Tue Oct 21 2014

Will do :)



@CharliePoole commented on Fri Oct 24 2014

I'm wondering about the emphasis here on generating the XSD from an actual file. It's as if the XML output were some strange language, for which we don't have a key. Yet the file as it exists was designed. It contains what it contains as a result of some decisions we (principally I) made. It seems as if there should be a better way to document those decisions, then use that to create an XSD.

The decisions are embodied in the code, of course, in certain specific places I could point you to. Alternatively, while doing some reading, I discovered the compact form of RELAX NG. What if I simply provided that?

None of this resolves the further question I put in a separate thread: How do we get people to review the format? Can we generate web pages that document the meaning of each element in plain English?


@rprouse commented on Fri Oct 24 2014

I did a quick Google search and found XSDDoc, a freeware tool that generates HTML documents describing the XSD.

The results look like this: http://www.filigris.com/docflex-xml/xsddoc/examples/html/XMLSchema/PlainDoc.html

There are probably others, that was just the first freeware program I found.


@OsirisTerje commented on Fri Oct 24 2014

Sorry to pop in a bit late here.

I've been doing quite a lot of work with XML/XSD for miscellaneous projects. My experience is to keep the XML as a transport format (equal to the NUnit 3 idea, as far as I can see), and the XSD as a representation of the design, as Charlie points out. What I normally do is to represent the design first as a C# class structure, then generate the XSD from that, which can be turned into documentation as Rob says with XSDDoc or similar tools (I have normally used XmlSpy; Oxygen is a similar tool). The XSD may have to be tweaked somewhat even when generated from the classes, but this normally gives a much better result than generating from the XML. The XSDs you get from the XML are mostly pretty wild, because there are more degrees of freedom in that direction: multiple different XSDs can represent the same XMLs. There are different XML-to-XSD tools around, and they all generate different XSDs. The Microsoft XSD tool generates some pretty heavy XSDs, compared to some off the web which in fact generate better ones.
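The classes-first workflow described here roughly maps onto Microsoft's xsd.exe tool; the commands below are only a sketch (the assembly, type and file names are hypothetical):

```
rem Generate schema0.xsd from a public type in a compiled assembly
xsd.exe MyResults.dll /type:TestResult

rem Or go the other way: generate C# classes from an existing schema
xsd.exe testresult.xsd /classes
```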

When we are satisfied that our XSD schema is the correct representation of the design, I think a tool like the one Rob pointed to is good enough to generate some documentation. The details are often not that interesting for review purposes (it becomes quite a lot; I doubt people want to go through all of it), but the drawings are often easily understandable and may generate some feedback.

I do believe more people are used to XSDs than Relax NG, so - although very simple - I fear the relax ng format will simply confuse more than clarify.

Having plain English documentation together with an XSD generated drawing, and a sample XML file, I think would be the best choice.

My 5 cents .....


@CharliePoole commented on Fri Oct 24 2014

Thanks Rob, I'll take a look at this.

Charlie



@CharliePoole commented on Fri Oct 24 2014

Hi Terje,

Well, we do have a class structure in the framework of course. Take a look at TestResult for example. The XML representation was designed to match that class and is created programmatically. Could we use that as a basis even though the implementation does not use standard XML auto-serialization?

We also need to have a class representation for the TestResult in the runners (or maybe the engine, for use by the runners). Should we start by creating that class, making it similar to the framework TestResult?

Charlie



@OsirisTerje commented on Fri Oct 24 2014

I'll have a look. The xsd tool requires the properties to be public, so if that is the case it will work. There are other tools too, but xsd is very easy to use. I think it is always good to have a C# class as the de-facto design representation. I don't think I like XML any more than Charlie does, but I have been forced to work with it :-)
For the Test Result - absolutely!


@ravichandranjv commented on Thu Oct 08 2015

Unable to attach file for some reason.

The suggestion to use a tool to generate the XSD from the XML file is not so simple, because there are some restrictions that need to be taken care of in the output XML file, such as specifying that only certain values may be present for some elements.

@CharliePoole The "Description" value is missing in the TestResult format described on GitHub, so the XSD below is still not complete.

Plus, there are many other maintenance-related points, such as the global or local impact of future schema changes on XML validation, which should be discussed here.

I have tested this schema for only two elements, TestRun (test-run) and Environment, but would still like someone to check whether the schema validates correctly.

The schema does not incorporate all the attributes of other elements described in the format at github. If this schema looks like moving in the right direction, I will complete the declarations for the remaining elements.

Assumption: the root element is named TestRun, without the '-', as in the extracted XML below from the actual TestResults.xml. (Will this cause a rewrite to code? Using '-' in schema declarations of element names is not a good practice.)

The XML


<?xml version="1.0" encoding="utf-8" standalone="no"?>
<TestRun xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://www.nunit.org testresults.xsd"
         id="2" testcasecount="2" total="2" passed="2" failed="0"
         inconclusive="0" skipped="0" asserts="0"
         start-time="2015-10-07 13:30:06Z" end-time="2015-10-07 13:30:07Z">
  <environment nunit-version="3.0.5715.30860" clr-version="4.0.30319.42000" os-version="Microsoft Windows NT 6.1.7601 Service Pack 1" platform="Win32NT" cwd="E:\nunit3_tests" machine-name="JV-PC" user="Jv" user-domain="Jv-PC" culture="en-US" uiculture="en-US" os-architecture="x86" />
</TestRun>

The XSD

<?xml version="1.0"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:nunit="http://www.nunit.org"
            elementFormDefault="unqualified"
            attributeFormDefault="unqualified">
  <xsd:element name="TestRun" />
  <xsd:complexType name="etr">
    <xsd:sequence>
      <xsd:element ref="Environment" minOccurs="1" maxOccurs="unbounded" />
      <xsd:element ref="TestSuite" minOccurs="1" maxOccurs="unbounded" />
      <xsd:element ref="CommandLine" minOccurs="1" maxOccurs="unbounded" />
      <xsd:element ref="Settings" minOccurs="1" maxOccurs="unbounded" />
    </xsd:sequence>
  </xsd:complexType>
  <xsd:complexType name="tr">
    <xsd:attribute name="id" type="xsd:int" use="required"/>
    <xsd:attribute name="testcasecount" type="xsd:int" use="optional"/>
    <xsd:attribute name="result" type="xsd:string" use="required"/>
    <xsd:attribute name="total" type="xsd:int" use="optional"/>
    <xsd:attribute name="passed" type="xsd:int" use="required"/>
    <xsd:attribute name="failed" type="xsd:int" use="required"/>
    <xsd:attribute name="inconclusive" type="xsd:int" use="required"/>
    <xsd:attribute name="skipped" type="xsd:int" use="required"/>
    <xsd:attribute name="asserts" type="xsd:int" use="required"/>
    <xsd:attribute name="start-time" type="xsd:date" use="required"/>
    <xsd:attribute name="end-time" type="xsd:date" use="required"/>
    <xsd:attribute name="duration" type="xsd:decimal" use="required"/>
  </xsd:complexType>
  <xsd:element name="Environment">
    <xsd:complexType>
      <xsd:attribute name="nunit-version" type="xsd:string" use="required"/>
      <xsd:attribute name="clr-version" type="xsd:string" use="required"/>
      <xsd:attribute name="os-version" type="xsd:string" use="required"/>
      <xsd:attribute name="platform" type="xsd:string" use="optional"/>
      <xsd:attribute name="cwd" type="xsd:string" use="required"/>
      <xsd:attribute name="machine-name" type="xsd:string" use="optional"/>
      <xsd:attribute name="user" type="xsd:string" use="required"/>
      <xsd:attribute name="user-domain" type="xsd:string" use="required"/>
      <xsd:attribute name="culture" type="xsd:string" use="required"/>
      <xsd:attribute name="uiculture" type="xsd:string" use="required"/>
      <xsd:attribute name="os-architecture" type="xsd:string" use="required"/>
    </xsd:complexType>
  </xsd:element>
  <xsd:element name="TestSuite" type="xsd:string"/>
  <xsd:element name="CommandLine" type="xsd:string"/>
  <xsd:element name="Settings" type="xsd:string"/>
</xsd:schema>


@CharliePoole commented on Thu Oct 08 2015

As you see from the milestone assigned to it, we are not working on this right now, so I'll only give a few brief comments. I don't really want to try to create an XSD until after we settle on the actual file content. I think a "friendlier" format will be better for people to comment on.

The file content is being worked on as issue #254 and the current documentation of the file format is at https://github.com/nunit/nunit/wiki/Test-Result-XML-Format. We are waiting for comments!

Since you have gone to this trouble, however, here are a few comments to your comments. :-)

  • Generating an XSD schema from the file: We are not doing that.
  • Description is not part of the file because it's a property rather than an attribute.
  • Use of hyphen in element names: The NUnit 2.x result format has used hyphens since the year 2000. It has never appeared to cause any problems. Is there a reason you say it is not a good practice? Changing all the elements would be a significant programming change that we would be unlikely to do at this late date.

As I explained above, issue #254 is a pre-requisite to this one. We do not want to have to work on the XSD more than once and we don't actually need it to release NUnit 3.0. That's why it's in the milestone of 3.0-Extras, that is, individual bits to be worked on after release.


@ravichandranjv commented on Thu Oct 08 2015

Hi Charlie, Thanks for spending some time on XML. :)

  • Use of hyphens as not a good practice is simply in the context of XSD element naming conventions, which follow the UpperCamelCase convention.

You have mentioned above, "some people like to validate against an XSD", which is the reason why I posted the XSD.

By having the XSD, people can use tools to generate a schema diagram from it, which can also serve as documentation (with comments next to the schema boxes in the diagram), and validate the XML file, which may be important for people who want to work further through the XML using XPath or XSL.

Having a beta XSD now, however broken or incomplete, serves a purpose in giving some insight into the type of XML that should be generated from code.

For instance, in issue #254, the problem you have highlighted under <environment> gets resolved if we approach the XML creation through XSD standards, which say that if you wish to have multiple values in a data set for the same attribute/property of an object, you should use elements. The two (or more) runtime versions could then be accommodated under <environment> itself rather than at the assembly level, which should gain favor because it is an OO approach, as opposed to your stated preference there to have a semi-colon-separated set of values under a single field.

The reason I mention it here and not in the comments of #254 is that it relates more to XSD (as and when it may get created): values, too, can be constrained in XSD, and if that is ignored it can cause errors at validation time, after the XML is generated, creating problems for report generation of the test results.

So, even if it is not a priority now, having a glimpse of the kind of XSD that may be needed later helps reduce the chance of something like nunit/nunit#850, i.e. a schema-validation XML error that has nothing to do with NUnit usage causing tests to fail.


@CharliePoole commented on Thu Oct 08 2015

Thanks for your comments. We will keep them in mind when we work on the XSD. NUnit itself does not require or make any use of an XSD, so it is an extra that we will provide for those who need it only after the release.

We have been working on the new format for the result file for over a year and it is already being created. The documentation I created reflects what we are actually doing. It's important to us to have this reviewed. I understand that you are suggesting a different process, but we have a process already under way. We are stretched too thin to add any new tasks right now. If you would like to help, please stick to the approach we are taking and make comments either on issue #254 or through the developer list.



@ravichandranjv commented on Fri Oct 09 2015

Hi Charlie,

I am not suggesting any process change nor proposing any new tasks; I just highlighted some points related to XSD.

I am happy to go by your judgement/direction.


@stashell commented on Wed Jul 20 2016

Hello everyone. We are writing a tool which will also parse NUnit3 test results, and thus I have a few questions regarding the XSD schema and the stability of the test-results XML format design.

  • Any news about the "existence" of an XSD for NUnit3 test-results?
  • How stable is the XML test-results design currently?
  • Do you expect many changes in further 3.x versions?

Thank you.

Patrik


@rprouse commented on Wed Jul 20 2016

@stashell there is currently no XSD for the test-results format, but the design is fairly stable. The XML format is part of the API between the NUnit Engine and runners including 3rd Party runners, so we are careful to not change it unless there are bugs. Any changes tend to be minor and additive. For example, we might add a new attribute or occasionally a new child element, but that is rare.

To get a good example of all of the combinations, I would suggest creating a mock test suite that has a variety of passing, failing, ignored and explicit tests, as well as errors. Errors are tests that throw non-NUnit assert exceptions. I would also use properties and categories on tests and include test output using the TestContext.Write methods.
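For tools consuming the format (like the one @stashell describes), the summary attributes can be read with any XML parser. A minimal Python sketch; the embedded document is a handmade fragment based on attributes discussed in this thread, not real runner output:

```python
import xml.etree.ElementTree as ET

# Handmade fragment modelled on the attributes discussed in this thread;
# real result files nest test-suite/test-case elements much more deeply.
SAMPLE = """<test-run id="2" testcasecount="2" total="2" passed="1"
              failed="1" inconclusive="0" skipped="0" result="Failed">
  <test-suite type="Assembly" name="MyTests.dll" result="Failed" />
</test-run>"""

root = ET.fromstring(SAMPLE)
summary = {key: root.get(key) for key in ("total", "passed", "failed", "result")}
print(summary)  # {'total': '2', 'passed': '1', 'failed': '1', 'result': 'Failed'}
```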


@CharliePoole commented on Wed Jul 20 2016

@stashell In addition, read the docs! https://github.com/nunit/docs/wiki/Test-Result-XML-Format

@rprouse Maybe we should try to get this done. I don't think either of us is very enthusiastic about the job. :-) Any XML lovers out there to volunteer?

Report progress from console runner (nunit3-console.exe)

@shift-evgeny commented on Fri Aug 12 2016

When running many tests using nunit3-console.exe it would be useful to see how many tests have been run and, ideally, how many remain to be run, to get a rough idea of how far along the testing process is. It would also be useful to show the currently running test, so that tests which freeze or are particularly slow can be easily identified. #1139 and #1226 are useful if you have custom output that you want your tests to write, but this is about a general progress indicator for all tests. I believe NUnit 2 used to do something like this.


@rprouse commented on Fri Aug 12 2016

I would like to see progress reported also, but to do so, we need to load all of the assemblies at the start and count all the tests. We just added changes to only load assemblies as we run them because of the high memory usage otherwise. Maybe we could add progress for each test assembly?

A workaround for seeing tests as they are run is to add the --labels=All command-line option, which will cause the test name to be output after each test finishes.


@CharliePoole commented on Fri Aug 12 2016

NUnit 2 showed progress in the console runner by displaying a single char for each test as it finished. Normally it would be '.' but an 'F' was displayed for failures. There was no total count of tests.

@shift-evgeny Can you suggest a particular console implementation that you'd like to see? Are you suggesting we take over some lines of the console display and keep updating it?

As @rprouse points out, we just removed the requirement to load and count all tests before running any of them. Folks who have 50 or so assemblies will benefit from this, so we don't want to remove it. We could show a count of assemblies and a separate count of tests within each assembly.


@shift-evgeny commented on Fri Aug 12 2016

Yes, I think showing the progress per assembly would be a good compromise. (After running a set of tests once or twice a programmer has a pretty good idea of which assemblies are the slow ones.)

Yes, I think it would be nice to keep updating the same lines - though this would have to be tested to see how it behaves in TeamCity. I imagine something like this:

MyFirstTestAssembly    [.........                                 ]  (090/200)
    CurrentlyRunningTestInMyFirstTestAssembly
MySecondTestAssembly   [....F...........E............             ]  (140/192)
    CurrentlyRunningTestInMySecondTestAssembly

F for failure as in NUnit 2 and something else for error (E or whatever NUnit 2 used to show).

Thanks for the --labels=All tip, that could be a useful workaround. What is the difference between --labels:On and --labels:Off, though? Neither of them shows anything and the documentation doesn't explain this at all.
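A display along those lines could be driven by a small formatter; here is a minimal Python sketch (the function name and layout are hypothetical, not NUnit code):

```python
import sys

def format_progress(done, total, width=40, label="MyFirstTestAssembly"):
    """Build a one-line bar like the mock-up above; purely illustrative."""
    filled = width * done // total                 # columns to fill
    bar = "." * filled + " " * (width - filled)
    return "{:<22} [{}]  ({:03d}/{:03d})".format(label, bar, done, total)

# Redraw the same console line: '\r' returns the cursor to column 0.
sys.stdout.write("\r" + format_progress(90, 200))
sys.stdout.flush()
print()
```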


@rprouse commented on Fri Aug 12 2016

I liked the way MbUnit did progress: they took over the last line of the console and created a progress bar and % complete that constantly updated as the tests ran. We could modify that and take over two lines; the first could be progress for the assemblies and the second for the tests. That might be hard when we are running multiple assemblies in parallel though. We also need to handle test output going above the progress bar in the output.

It also needs to work with captured output in CI systems, so it should probably be opt-in with a command line argument.


@rprouse commented on Fri Aug 12 2016

Correct me if I am wrong @CharliePoole, this is off the top of my head πŸ˜„

--labels=On will print out labels for tests that have test output, --labels=All will print out labels for all of your tests and Off turns them off. If the docs confused you, we should probably update them.
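As command lines, the three settings look like this (illustrative invocations; the option values come from the discussion above):

```
nunit3-console my.tests.dll --labels=Off   # no per-test labels
nunit3-console my.tests.dll --labels=On    # label tests that produce output
nunit3-console my.tests.dll --labels=All   # label every test
```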


@CharliePoole commented on Fri Aug 12 2016

@rprouse Exactly. There may be a current bug in that All is printing the labels at the end of the test. I originally designed it to print at the start of the test so you knew what test was running when it hung.

Another option would be a pop-up progress window. I've seen some great download windows on linux that handle as many threads of execution as you have with a separate bar for each one.


@shift-evgeny commented on Fri Aug 12 2016

No, please, no pop-up windows from a console app!


@CharliePoole commented on Fri Aug 12 2016

Never? Not even if you typed nunit3-console my.dll --popup?

Seriously, I'd say it's a matter of taste. Some folks think that a curses-style screen update is nasty.


@CharliePoole commented on Fri Aug 12 2016

We could do it with one line per agent, but you could have more agents than lines and then you wouldn't see anything. For a one- or two-line implementation, we could get the count of the first agents loaded right away, from the start-test message from each assembly. So it would work as expected so long as the number of assemblies was less than or equal to the number of agents specified. For more assemblies, you would appear to lose ground in the progress bar as each new assembly was loaded. Not perfect, but not the end of the world either.

Also, if we used a numeric format rather than a visual bar we could probably fit six to eight assembly counters on a single line.

Just throwing out ideas for the moment. :-)
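
The single-line numeric format could be sketched roughly as follows (a minimal sketch; all names here are invented for illustration, not actual NUnit code). A formatter builds one counter per assembly, and the renderer overwrites the same console line using a carriage return:

```csharp
using System;

static class ProgressLine
{
    // Renders something like "A1 45/120 | A2 7/300" -- one counter per assembly.
    public static string Format((string name, int done, int total)[] assemblies)
    {
        var parts = new string[assemblies.Length];
        for (int i = 0; i < assemblies.Length; i++)
            parts[i] = $"{assemblies[i].name} {assemblies[i].done}/{assemblies[i].total}";
        return string.Join(" | ", parts);
    }

    public static void Render((string name, int done, int total)[] assemblies)
    {
        // '\r' moves the cursor back to column 0 so the line is overwritten in place;
        // padding clears leftover characters from a previous, longer render.
        Console.Write("\r" + Format(assemblies).PadRight(79));
    }
}
```

As discussed above, this style of update would need to be opt-in and disabled when output is redirected, since CI log capture would otherwise see one long garbled line.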


@shift-evgeny commented on Fri Aug 12 2016

Pop-up windows are a horrible interruption to whatever the user is doing, sometimes even stealing keystrokes or mouse clicks. They would be especially bad for something like NUnit console, which a user would often want to run in a background window and maybe check on once in a while. I'd certainly never enable it.

I'm not sure how to deal with too many agents/assemblies. Is it likely that a user would have more than 30 or so agents? If they did, wouldn't they see the last 30, rather than nothing? I guess they could scroll the window up to see more. Not ideal, but I can't think of anything else at the moment.


@CharliePoole commented on Fri Aug 12 2016

In this case, however, the window would be up for the duration of the run. The user could move it elsewhere in order to keep the progress in view, or minimize it. It could even show a summary progress state when minimized. Granted, not everyone's cup of tea.

Regarding agents, the current default is to not limit the number of agents. You get one for each assembly. I've been amazed at the number of test assemblies some users want to run together. Now that we allow loading of the assemblies as-needed, there is more motivation for the user to specify the number of agents as something like the number of cores, but that's brand new and not something users will discover easily - unless they actually read the release notes, that is!


@dybs commented on Fri Aug 12 2016

What about an option to switch between updating the progress per assembly (as discussed above, maybe --progress=summary), or showing the pass/fail/error status of a test once it completes like NUnit 2 (perhaps --progress=detailed)? Or maybe this could just be an option added to --labels=All (such as --labels=Result)? In my case, I have a couple thousand test cases I'm running through and I'd like to see in real-time which ones fail so I can investigate those cases while the remaining tests continue to run. Even though I can see which tests have executed using --labels=All, I have no idea if they passed or failed.


@CharliePoole commented on Fri Aug 12 2016

For implementation purposes, having a progress display (curses-style or windows dialog) is fundamentally different from changing the labels option. Changing what labels does is relatively minor. We would tweak some existing code. The progress display requires a few new classes. Obviously, neither is rocket science.

I like the idea of --labels=result or alternatively --labels=after. We could support Before and After, letting On be a synonym for Before. It seems to be a separate issue from this one, however, and it definitely requires different code.
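
A sketch of how those option values might be normalized, keeping On as a synonym for Before (the enum and parser here are invented for illustration; they are not the actual console-runner code):

```csharp
using System;

enum LabelStyle { Off, Before, After, All }

static class LabelOption
{
    public static LabelStyle Parse(string value)
    {
        switch (value.ToLowerInvariant())
        {
            case "off":    return LabelStyle.Off;
            case "on":                          // kept as a synonym for Before
            case "before": return LabelStyle.Before;
            case "after":  return LabelStyle.After;
            case "all":    return LabelStyle.All;
            default: throw new ArgumentException($"Invalid --labels value: {value}");
        }
    }
}
```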


@dybs commented on Fri Aug 12 2016

OK, I wasn't sure if it would be separate or not since it's still somewhat related to reporting progress. Should I create a separate issue for the --labels=result idea?


@CharliePoole commented on Fri Aug 12 2016

@dybs I think that would be best, since this solution doesn't seem to do anything for @shift-evgeny, who created this issue. :-) Also, your idea is what we usually label as "easyfix" to encourage newcomers to submit PRs.


@dybs commented on Fri Aug 12 2016

@shift-evgeny Sorry for hijacking your issue.


@CharliePoole commented on Fri Aug 12 2016

It helps us to keep things separated, but it also helps the folks who ask for fixes or features. Mixed issues usually get handled and prioritized based on the hardest, vaguest, most uncertain piece of work they contain, and can only be assigned to somebody skilled enough to do every part. So if you have something more or less trivial, you want it to be an issue by itself.

Mono detection on Windows broken

@CharliePoole commented on Fri Jul 01 2016

Various changes in how mono is installed on Windows over time have got us to a point where we no longer correctly detect mono installation on Windows. In particular, newer versions no longer use the Software/Novell/Mono key. We need to research this and make it work for some range of versions. Deciding what that range is should be part of this task.

Provide user interface for managing extensions

@CharliePoole commented on Sat Dec 12 2015

Users need a way to install extensions, enable and disable them and update them. This could be a separate program or a set of files that get edited.

This depends in part on #1133 which deals with the underlying mechanism for installing extensions.

Updated 18 Jan 2022 -

For the current console runner, there seem to be three options for this...

  1. Additional conventional locations for storing extensions.
  2. A command-line option or options for the runner itself
  3. A separate install program

I have ordered the options from easiest to hardest to implement at least at first glance. :-)

Installers Cause a SmartScreen warning

  1. Download installers from the web
  2. Attempt to install
  3. A Windows SmartScreen warning pops up preventing you from installing
  4. You must click the small More info link to get the following screen that allows you to install.

[screenshot: SmartScreen warning with the "More info" link]

I think that we need to purchase a signing certificate to get around this. Maybe we just document it? I believe it will also be flagged as safe once enough people install it.

There is more information here: http://stackoverflow.com/questions/12311203/how-to-pass-the-smart-screen-on-win8-when-install-a-signed-application

Other open-source projects have had this problem. Apparently there are signing authorities that will give out certs for open-source projects. See MonoGame/MonoGame#3189.

`--domain=Multiple` fails running in the agent (separate process).

@rprouse commented on Sun Aug 14 2016

Run multiple unit tests with --domain=Multiple and it fails with,

  Overall result: System.NullReferenceException: Object reference not set to an instance of an object.
   at NUnit.Common.ColorConsoleWriter.WriteLabel(String label, Object option, ColorStyle valueStyle)
   at NUnit.Common.ColorConsoleWriter.WriteLabelLine(String label, Object option, ColorStyle valueStyle)
   at NUnit.ConsoleRunner.ResultReporter.WriteSummaryReport()
   at NUnit.ConsoleRunner.ConsoleRunner.RunTests(TestPackage package, TestFilter filter)
   at NUnit.ConsoleRunner.Program.Main(String[] args)

It works however with --inprocess --domain=Multiple. This was found in 3.4.1, we should retest with latest master as @CharliePoole may have fixed this with recent changes.


@CharliePoole commented on Sun Aug 14 2016

I think this is a repetition of #1732 except that it was originally reported using only one assembly.


@CharliePoole commented on Sun Aug 14 2016

I replicated the problem with my current master. In fact, the key difference is the use of two assemblies. I think that's because domain=Multiple is actually ignored if there is only one assembly.

While this is probably not critical in itself, it bothers me because I don't understand what is happening here. The issue now only arises when you force domain:Multiple in a situation where you are running in a separate process per assembly and there is therefore only one assembly in the domain anyway. This has always been handled gracefully before. Let me take a closer look.

Explore option bypass --where expressions

When running nunit3-console with --explore and --where "EXPRESSION" nunit3-console will bypass the --where expression.
Attached is an image showing what I'm doing to reproduce this locally.
I added 4 tests
2 with specflow (1 with a tag called "One", another with a tag "Two")
2 regular nunit tests (1 with a tag called "One", another with a tag "Two")
When running tests with --where "cat != One" it works fine: only 1 specflow + 1 non-specflow test runs.
When running the app with both --where and --explore, I see all the tests on the screen.

Note: The dll is called TestSpecflow but there are only 2 specflow tests.

[screenshot: explore output listing all tests]

Upgrade Cake build to latest version

@CharliePoole commented on Thu Jun 23 2016

Currently seems to be 0.13


@ChrisMaddock commented on Thu Jun 23 2016

When this gets done, it might be nice to also make use of cake-build/cake#908. :-)


@CharliePoole commented on Thu Jun 23 2016

@ChrisMaddock I agree!


@ChrisMaddock commented on Fri Jun 24 2016
#1615 should probably be investigated before this. :-)


@ChrisMaddock commented on Tue Jul 26 2016

It's probably worth now waiting for Cake 0.15, which is coming soon. It will have the required fixes for cake-build/cake#908, and also full error messages from the NUnit3 tool, which might make it a preferable alternative.

This might even be better done with a bit of a refresh, post-split.


@CharliePoole commented on Tue Jul 26 2016

Seems fair. BTW, I'm using Cake 0.14 for the teamcity extension and having some issues with mono 3.2.8. So we'll need to see if that gets fixed.


@ChrisMaddock commented on Tue Jul 26 2016

Good to know.

Reopening this issue as I presume closing it was accidental?


@CharliePoole commented on Tue Jul 26 2016

Yes... I keep doing that! Button should say Close Issue!

Fix for my problem is supposed to come in Cake 0.15 in a few days.

ConsoleRunner spawns too many agents

@CharliePoole commented on Thu Jul 28 2016

@Trass3r commented on Thu Jul 28 2016

packages/NUnit.ConsoleRunner.3.2.0/tools/nunit3-console.exe [lots of *UnitTest.dlls]  --work=... --out=TestOutput.log --result=TestResult.xml --labels=On --framework=net-4.5 --dispose-runners --agents=8
  • spawns 1 nunit-agent.exe per dll even though we specified --agents=8
  • those agents are created serially which takes a really long time and tests aren't run until this creation phase finishes

@CharliePoole commented on Thu Jul 28 2016

This is working as designed, but it's possible for us to change at least the internal design.

The --agents option as defined limits the number of agents that may run simultaneously, not the total number of agents created. So, if you were to have 20 assemblies, with a limit of --agents=8 the internal sequence of execution would flow as follows:

  1. All 20 tests are loaded. This requires creation of 20 different agents.
  2. The tests are executed. NUnit starts execution in the first 8 agents. As each agent completes, another one is told to run until all 20 agents have completed execution.

As you point out, this slows things down a bit at the start. I see three things we could do to improve the situation, listed in increasing order of difficulty.

  1. We could postpone loading each test until execution begins. This would not reduce the overall amount of work but would de-serialize it significantly and allow some of the loads (12 of them in my example) to occur in parallel with the execution of other assemblies. We would still create 20 agents. Note that this approach only works for the console runner, since the gui requires all tests to be loaded in order to display them. Hence, it would need to be controlled by a package setting of some kind.
  2. We could load all the tests first, but do it in parallel using the number of agents specified. This is somewhat harder than option 1 and doesn't sound like it would give a lot of performance benefit, since the loading of tests will tend to be io-bound.
  3. We could load each test only as needed (option 1) and additionally reuse agents where possible. In the case of my example, we might manage to only create 8 agents. OTOH, depending on the bitness and framework requirements of each assembly, we might need more, maybe even up to 20. However, in most cases, I suspect we could significantly reduce the number of agents. This approach would require us to record the relevant info about each agent created and maintain a list of available agents. We would need to either ignore --dispose-runners and possibly deprecate it for the future, since NUnit would need to be in charge of the agent lifecycle. On the plus side, agents already have code to determine whether they are reusable for a given package.

My own inclination would be to implement option 1 in this issue and to create another issue for the future that would expand it to use option 3.

@nunit/core-team Any thoughts on this?
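
Option 1 could be sketched conceptually like this (a sketch only; `loadAndRun` is a hypothetical stand-in for whatever creates an agent, loads the assembly and runs its tests). A semaphore gates the work so that at most the --agents limit is in flight, and each load happens only when its slot opens:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

static class JustInTimeRunner
{
    public static void RunAll(IEnumerable<string> assemblies, int maxAgents,
                              Action<string> loadAndRun)
    {
        using (var slots = new SemaphoreSlim(maxAgents))
        {
            var tasks = new List<Task>();
            foreach (var assembly in assemblies)
            {
                slots.Wait();                     // block until an agent slot frees up
                tasks.Add(Task.Run(() =>
                {
                    try { loadAndRun(assembly); } // load is deferred to this point
                    finally { slots.Release(); }
                }));
            }
            Task.WaitAll(tasks.ToArray());
        }
    }
}
```

This doesn't reduce the total amount of work, but it de-serializes the loads and overlaps them with execution, matching the description of option 1 above.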


@rprouse commented on Thu Jul 28 2016

I prefer option 1. I think the complexity of option 3 reduces its ROI to a point where we may not want to do it.

From my own experience, we have around 30 large test suites at work. Starting all the agents and loading the tests puts unnecessary memory pressure on the machine. If agents didn't start until we are ready to run the tests and exited afterwards, it would greatly reduce the memory requirements and allow us to run build agents with less memory.


@Trass3r commented on Thu Jul 28 2016

I interpret 1) as agent-internal. So there would still be 20 agents running concurrently.
Couldn't you do something conceptually like

Parallel.ForEach(..., new ParallelOptions { MaxDegreeOfParallelism = ... },
    assembly => StartAgent());

As @rprouse said memory consumption of all those loaded agents is also considerable.


@CharliePoole commented on Thu Jul 28 2016

Option 1 is not internal to the engine. Only option 3 requires changes to the engine.

In option 1, my example would work as follows:

  1. Spin up eight agents, load and start running tests in each of them.
  2. As each agent completes, create a new engine to load and run tests for the next assembly.

If --dispose-runners were in use, then only 8 engines would be in existence at one time. Without --dispose-runners, the number would grow slowly from 8 to 20.

@rprouse Can you recall why we didn't make --dispose-runners the default? I am suspecting it was because we hoped to be able to re-use agents at some future time.


@Trass3r commented on Thu Jul 28 2016

Ah ok. Yes that would be perfect.


@rprouse commented on Thu Jul 28 2016

@CharliePoole I think you are right about --dispose-runners. Should we make it the default? I don't know why anyone wouldn't want to dispose of them when they are done. I always add it to large test runs.


@CharliePoole commented on Thu Jul 28 2016

We added --dispose-runners as a result of issue #308. In your initial implementation, you didn't have a command-line option, but you added it at my request. I think we could make it a default for the console runner but we should keep the option around to avoid breaking people's scripts.


@CharliePoole commented on Sat Jul 30 2016

Well, there's a little hitch in the plan. When we start a run, we are supposed to issue a start-run notification to any event listeners. That event requires a count of the number of tests to be run. It's used by the Gui to initialize the progress bar.

For the console runner, it can probably be set to zero unless somebody is depending on it.

@nunit/core-team @nunit/contributors Can you think of any problem this would cause?


@rprouse commented on Tue Aug 02 2016

I don't see an issue setting it to zero in the console runner, but we probably don't want to make it the default.


@CharliePoole commented on Tue Aug 02 2016

What I've done in general is to use just-in-time loading on an assembly-by-assembly basis, unless the client actually calls ITestRunner.Load or ITestRunner.Explore first. That's what any gui will have to do anyway. For the special case where ITestRunner.Run needs a count for its start event, I check whether the package has already been loaded. If not, I use zero. This seems to take care of everything without the need for any extra setting being passed in.


@shift-evgeny commented on Thu Aug 18 2016

I tried the latest code on master and specifying --agents=3 still runs one agent.exe per assembly (8 in my case). This is a problem, because all those agent processes continue to use a lot of memory and they seriously slow down our CI server. It will only become more of a problem as we add more assemblies.

What is the point of an agent executable continuing to run when it won't be used any more? There must be a way to limit the total number of agents running at a time (processes running, whether they're doing anything or not), and based on the description of --agents ("NUMBER of agents that may be allowed to run simultaneously") that's exactly what I'd expect it to do.


@Trass3r commented on Thu Aug 18 2016

Maybe you forgot --dispose-runners?
The fix worked fine for me last week.


@CharliePoole commented on Thu Aug 18 2016

We did discuss this while the change was in process. The option limits the number of agents active at any time, not the number of processes. I'm on the road and on my phone, so I'll come back with a bit more explanation tonight and discuss workarounds and possible future changes.


@shift-evgeny commented on Thu Aug 18 2016

OK, thanks @CharliePoole.

@Trass3r --dispose-runners didn't sound relevant to this, but I tried it anyway and it made no difference.


@rprouse commented on Thu Aug 18 2016

@CharliePoole I think I misunderstood your PR for this, or it has been broken by the subsequent change that backed out some of the changes (more likely?)

What I am seeing now with the latest build of master:

Running 20 test assemblies on the command line with --agents=1 => 20 agents start up, one runs at a time, then all close at the end.

Running 20 test assemblies on the command line with --agents=1 --dispose-runners => 20 agents start up, one runs at a time and closes right after it is run.

My understanding of the PR was that only X agents would start where X is specified by --agents=X, and --dispose-runners was made the default, so I should see the same behaviour in both scenarios.

Did I misunderstand the PR, or does it look like it is broken again? Wait until you get home, no rush on this @CharliePoole πŸ˜„


@rprouse commented on Thu Aug 18 2016

My understanding was that Running 20 test assemblies on the command line with --agents=1 would start one agent, run tests, close agent, start next agent, etc. for each assembly.


@CharliePoole commented on Thu Aug 18 2016

@Trass3r Please say again which release is working OK for you.

@shift-evgeny Is the problem only with latest master or with some release?

@rprouse It was intended to work as you describe with the possible exception of making --dispose-runners the default. You and I decided that was a good idea, but I don't remember if I implemented it. In any case, something is now broken and we need to find out if what's broken was released.

Without yet having examined the code, here's what I think. When I added the checks that do just-in-time loading to AbstractTestRunner, I should have overridden it in AggregatingTestRunner. We expressly do not want to make sure that the entire package is loaded for this runner. Rather, we want to delegate that decision to the subordinate runners that it aggregates.

I'll take a look to see if that's truly the problem.

Taking it a step further, however, if anyone is expecting us to create only a single process and re-use it, that's not happening. If that's what's wanted, we should discuss it in another issue, because re-use of processes is a complicated topic and may not always be possible.

Charlie


@CharliePoole commented on Thu Aug 18 2016

I'm pretty sure it's as I described, so I'm reopening this.

@rprouse We were going to drop this change in a 3.4.2 release, but we never did one. We could still do a release after this fix is in again, or those who need it could continue to use the myget builds.


@CharliePoole commented on Thu Aug 18 2016

Actually, I need to reopen it in the other repository!

Cecil exceptions from engine

@CharliePoole commented on Sun Nov 08 2015

The engine can throw exceptions due to errors in Cecil Resolve(). I discovered this when working with the NUnit 3 VS Adapter. The errors are very hard to locate and occur when Cecil cannot resolve the item requested. The code assumes null will be returned rather than an exception being thrown.

We should find every place where we do a Resolve in the engine code and

  1. Catch it and take appropriate action
  2. Try to set up the resolver correctly so it doesn't occur.
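
In the spirit of item 1, a generic guard can treat a throwing resolve the same as a null return, so callers only ever make one null check. This is a sketch under the assumption stated above that Cecil's Resolve() can throw; real code would catch Cecil's specific resolution exception type rather than all exceptions:

```csharp
using System;

static class ResolveGuard
{
    // Runs a resolve callback; a thrown exception is treated as "not found".
    public static T TryResolve<T>(Func<T> resolve) where T : class
    {
        try { return resolve(); }
        catch (Exception) { return null; }
    }
}
```

A call site would then read e.g. `var def = ResolveGuard.TryResolve(() => typeRef.Resolve());` followed by a single null check.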

@CharliePoole commented on Sun Dec 27 2015

Issue #1144 points to some code where we simply don't check the possible null return at all!

TypeLoadException in nunit3-console 3.0.1

@yaakov-h commented on Sun Mar 13 2016

When using NUnit.Console 3.0.1 from the NuGet gallery, I'm getting an exception when starting the runner.

I'm only seeing this behaviour on some PCs.

NUnit Console Runner 3.0.5813
Copyright (C) 2015 Charlie Poole

System.TypeLoadException: Could not load type 'NUnit.Engine.IAvailableRuntimes' from assembly 'nunit.engine.api, Version=3.0.0.0, Culture=neutral, PublicKeyToken=2638cd05610744eb'.
   at NUnit.Engine.TestEngine.Initialize()
   at NUnit.Engine.TestEngine.NUnit.Engine.ITestEngine.get_Services()
   at NUnit.ConsoleRunner.ConsoleRunner..ctor(ITestEngine engine, ConsoleOptions options, ExtendedTextWriter writer)
   at NUnit.ConsoleRunner.Program.Main(String[] args)

This issue only occurs when trying to run NUnit.Console on a PC that has NUnit 3.2 installed globally by MSI.


@rprouse commented on Mon Mar 14 2016

@CharliePoole looks like an error with our engine resolution that we didn't think of. The console is finding the newer engine in the install directory and using that. Any ideas what we can do about it?


@CharliePoole commented on Mon Mar 14 2016

Ah ha! I didn't think of this. The new engine is getting loaded. It references the api assembly that is found in its own directory. But the old api assembly is already in memory.

@yaakov-h A workaround is to upgrade the console nuget package to 3.2. You don't have to upgrade the framework if you don't want to.

A temporary fix we could make would be to only use the local engine.

A longer term fix that allows dropping in new engine releases will take some thinking.


@yaakov-h commented on Tue Mar 15 2016

Why is the old runner loading the new engine?


@CharliePoole commented on Tue Mar 15 2016

Because nunit.engine.api is designed to find the newest engine available and use it, within limits set by the client program - nunit3-console in this case. Clearly, we have a glitch, as described in my note above.


@rprouse commented on Tue Apr 12 2016

@CharliePoole, should we do a temporary fix for this for 3.2.1?


@CharliePoole commented on Tue Apr 12 2016

Good idea. It's just a matter of having the console runner specify true for privateCopy when it calls TestEngineActivator.CreateInstance. That's what I've just done for the gui.


@rprouse commented on Wed Apr 13 2016

@CharliePoole, sounds good. I have put in the 3.2.1 milestone and assigned it to me for the temporary fix.


@knocte commented on Wed Apr 13 2016

A workaround is to upgrade the console nuget package to 3.2.

Are you sure about this? This is affecting me when simply running the test within XamarinStudio (not the console runner), and I'm already using version 3.2.

More info: https://bugzilla.xamarin.com/show_bug.cgi?id=40035


@rprouse commented on Thu Apr 14 2016

@knocte, I am pretty sure your problem is a different issue. It looks to be a problem with the XamarinStudio test runner which we did not write. I actually didn't even know it worked for NUnit 3 πŸ˜„

Looking at the source for the MonoDevelop NUnit 3 runner, they have not updated it or the engine to 3.2 yet, so it is not you that needs to update to 3.2, it is Xamarin Studio.

Looking at their runner code, they are loading v3.0.1 of the nunit.framework.dll into their test runner and directly linking to v3.0.1 of the nunit.engine. This locks Xamarin users into v3.0.1 of NUnit. Their runner code is totally incorrect. They really should have contacted us for guidance.


@rprouse commented on Fri Apr 15 2016
#1413 provides a short term fix for this by using a local engine. I am moving out of the 3.2.1 milestone.


@rprouse commented on Sat Jun 25 2016

I am going to leave this On Deck, but move out of the 3.4 milestone. The temporary fix is still in place and works. I think we need time to discuss possible fixes and work through the potential consequences of each. Until we do that, I think sticking with a local engine is safest.


@CharliePoole commented on Sat Jun 25 2016

I agree. This is really a symptom of the overall issue addressed by #1132. Should that be an Epic?

Agent's process stays in memory when it was failed to unload appdomain

@NikolayPianikov commented on Mon Jun 27 2016


@NikolayPianikov commented on Mon Jun 27 2016

From my point of view it could be a critical issue.


@CharliePoole commented on Mon Jun 27 2016

Is this new with 3.4?


@NikolayPianikov commented on Mon Jun 27 2016

Yes, looks like a regression from 3.2.1


@CharliePoole commented on Mon Jun 27 2016

This is a bit strange. The code at https://github.com/nunit/nunit/blob/master/src/NUnitEngine/nunit.engine/Services/DomainManager.cs#L153 throws an exception if the domain unload throws or if it times out. I have run under console and verified that the exception comes back to the console and is displayed. Maybe the Join is timing out and the Thread.Abort is hanging.


@NikolayPianikov commented on Tue Jun 28 2016

Anyway, we should terminate all related agents when the console process finishes.


@CharliePoole commented on Tue Jun 28 2016

Certainly... but we have to figure out how to do that. :-) The code I pointed to is part of the termination and it did change in 3.4.

Ideas:

  • Look at what changed in 3.4. Of course the change was intended to solve a problem, so we will have to make sure that still gets solved.
  • Use code from https://github.com/nunit/nunit/blob/master/src/NUnitFramework/framework/Internal/ThreadUtility.cs#L55 which may do a better job of killing the thread.
  • Agent termination has always been cooperative - that is, we tell it to terminate and it does. We could add some code to TestAgency to actually kill the process if it doesn't terminate within a certain time.
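
The third bullet could be sketched like this (all types here are hypothetical stand-ins, not actual TestAgency code): ask the agent to stop cooperatively, wait with a timeout, and kill the process only if it fails to exit:

```csharp
interface IAgentProcess
{
    void RequestStop();            // cooperative: ask the agent to terminate itself
    bool WaitForExit(int millis);  // true if the process exited within the timeout
    void Kill();                   // forced termination, the last resort
}

static class AgentTermination
{
    // Returns true if the agent had to be force-killed.
    public static bool Terminate(IAgentProcess agent, int timeoutMillis)
    {
        agent.RequestStop();
        if (agent.WaitForExit(timeoutMillis))
            return false;          // cooperative shutdown worked
        agent.Kill();
        return true;
    }
}
```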

What do you think?


@NikolayPianikov commented on Tue Jun 28 2016

OK, I will try to find it.


@rprouse commented on Tue Jun 28 2016

I need to look at the code, but a timeout in TestAgency and then killing the process seems like the right place.

Better would be identifying why it is happening and fixing that ;)


@CharliePoole commented on Tue Jun 28 2016

I reviewed the 3.4 changes and don't see anything that should cause this. Moving on to the other bullet points.


@CharliePoole commented on Thu Jun 30 2016

@NikolayPianikov We really need to get the release out, so we are going with the changes we have so far. We can follow up further on this problem if it continues.


@NikolayPianikov commented on Thu Jun 30 2016

Issue was reproduced on "master", see http://win10nik.cloudapp.net/viewLog.html?buildId=94&buildTypeId=NUnit_NUnit3IntegrationTests&tab=buildResultsDiv


@NikolayPianikov commented on Thu Jun 30 2016

The user creates a new AppDomain and does not finish a thread there. We can reproduce it using mocks.zip:

internal class UnloadingDomainUtil
{
    public static void Create()
    {
        var newDomain = System.AppDomain.CreateDomain(System.Guid.NewGuid().ToString(), System.AppDomain.CurrentDomain.Evidence);
        newDomain.CreateInstanceFrom(typeof(UnloadingDomainUtil).Assembly.Location, typeof(UnloadingDomainUtil).FullName);
    }

    public UnloadingDomainUtil()
    {
        // Starts a thread that never finishes, so the AppDomain cannot unload.
        new System.Threading.Thread(() => { while (true) ; }).Start();
    }
}

[Test]
public void MyTest()
{
    UnloadingDomainUtil.Create();
}

@CharliePoole commented on Thu Jun 30 2016

Can you run it using the release/3.4.1 branch?


@NikolayPianikov commented on Thu Jun 30 2016

One moment


@CharliePoole commented on Thu Jun 30 2016

And you say we were successfully terminating such a case using 3.2.1?

Since this is a user-specific situation, caused by a bad test, it could be postponed. The most critical issue is that 3.4 won't run the teamcity extension at all! I can only give this a very short timebox before I do the 3.4.1 release without it.


@NikolayPianikov commented on Thu Jun 30 2016

It is reproduced on 3.2.1 too. Maybe it is a slightly different case (we get an exception "Error while unloading appdomain"), but it could have a similar solution.


@CharliePoole commented on Thu Jun 30 2016

When I add your test to a console run, the only effect is a brief pause before it terminates. This works if I run in process or in a separate process.

I'm going to move ahead with 3.4.1 as it is. After it's out, we can try to address this if it's still a problem.


@tfabris commented on Tue Aug 23 2016

Regarding Nunit issue #1628, "Agent's process stays in memory when it was failed to unload appdomain":

This is a blocking issue for us at the moment, because we recently converted our company's tests from NUnit 2.6.4 to NUnit 3.4.1. There are some details about the issue which were not revealed in the original bug report and discussion, and we have some new information about this bug. We would like to request that the bug be re-opened.

New details:

  • Bug definitely did not occur in Nunit 2.6.4.

  • Bug occurs in 3.4.1 but we do not know in which build it was introduced.

  • Bug occurs only on certain test DLLs which induce the situation that causes the message to appear: "Unable to unload AppDomain, Unload thread timed out."

  • Important: Bug only occurs when the output of the console is redirected. For example, in test automation situations where console output is captured to memory, such as when Team City is running the Nunit Console runner to execute tests. This bug causes Team City to hang and not complete the build. For ease of repro, you can also reproduce in a powershell script which redirects console output.

  • Important: Bug does not occur when you run the Nunit Console at a plain console (output is not redirected).

  • Example PowerShell command which does NOT cause a hang (bug still occurs, but the output appears normally and drops back to powershell prompt):

    & nunit3-console.exe MyBadTest.dll --x86 --framework=net-4.0

  • Example PowerShell command which DOES INDUCE the hang because it redirects the output (this is the same thing that happens to Team City):

    $result = & nunit3-console.exe MyBadTest.dll --x86 --framework=net-4.0
    write-host "$result"

  • If you watch the Windows Task Manager in the two cases above, you see a launch of Nunit3-Console.exe and one or more Nunit-agent-x86.exe instances. Only the Nunit3-Console.exe closes at the end of the test; the agent instance(s) remain open and do not close. When there is no console redirection, control returns to the console, but when there is redirection, control does not return until the agent process is forcefully terminated.

  • This is a blocking issue for any instances where Nunit Console is being launched by any type of automation, because it causes the automation to hang, and fail to process any more commands. The test runner should close down when it is done, regardless of whether the tests had an issue unloading the appdomain or not.


@tfabris commented on Tue Aug 23 2016

Clarifying my last message:

  • The same bug occurs whether or not you redirect the console output: The nunit agent remains in memory and does not leave.
  • However, you only notice a problem (it's only a blocking issue) in situations where the console output is redirected, such as part of an automation system. That is the situation in which the automation hangs because of the Nunit agent which does not close.

@CharliePoole commented on Tue Aug 23 2016

Some work is ongoing in this area. I'll reopen this and we can test when the changes are in.


@tfabris commented on Tue Aug 23 2016

Sweet. We have an easy repro case, so let me know if you want me to verify a fixed build.


@CharliePoole commented on Sat Aug 27 2016

@tfabris Can you test this against the latest MyGet build? It's at https://www.myget.org/feed/Packages/nunit and the latest version at the moment is 3.5.0-dev-03158.


@tfabris commented on Mon Aug 29 2016

I'm very interested in trying out this build of Nunit against our repro case.

However, I am unable to obtain it at the link you provided. Following the link brought me to a signup page for the MyGet service. I had never used MyGet before, and even after I set up an account there and created a feed, the link you provided ( https://www.myget.org/feed/Packages/nunit ) still does not let me obtain the file; it redirects me to a MyGet help/config page. Even after I filled out my profile, created a "Feed" in MyGet, and tried to add "NUnit" to my feed, same thing: I can't obtain the new build of NUnit that you're trying to link me to. The version of NUnit that MyGet gives me is 3.4.1, and I don't see any option for work-in-progress builds.

Do you have any tips on how to obtain this 3.5.0 dev build?

Thanks!


@CharliePoole commented on Mon Aug 29 2016

@tfabris Sorry - I guess that's only a link for our account to use. There is a Gallery feature in MyGet, but we have not set that up for nunit yet.

Use the following url in Visual Studio (under Nuget sources) to get our myget feed:
https://www.myget.org/F/nunit/api/v2

Alternatively, you can grab the build from AppVeyor:
https://ci.appveyor.com/project/CharliePoole/nunit-console/build/3.5.0-dev-03158/artifacts


@tfabris commented on Mon Aug 29 2016

Thanks so much, Charlie!!!!

I obtained the package from AppVeyor: package\NUnit.ConsoleRunner.3.5.0-dev-03158.nupkg

I extracted it and ran the console runner contained within that package at the command prompt, pointing it to our repro DLL.

Result: Same problem. Steps:

  • When the tests in our repro test DLL are complete and all tests have passed, the final report of test results appears at the console. All tests pass. Then, after the results are displayed, the error message "Unable to unload AppDomain, Unload thread timed out" appears on the console (this is the bad condition that the test DLL must induce in order for the problem below to repro).
  • At that point, "nunit3-console.exe" closes and is no longer in memory, as expected. However, "nunit-agent-x86" remains in memory and never exits. (This is the bug.)
  • If you are running directly at the DOS console, control returns to the console despite the agent remaining in memory.
  • If you are redirecting the console output of the NUnit runner (for example, if you are running a script which pipes the output somewhere else for capture/logging, or you are running in Team City, which pipes the output into memory for capture/logging), then the process hangs and control is not returned to the calling program. It waits forever and does not return. If your test is running in Team City, the build hangs forever and never finishes. (This is the most critical failure mode of the bug, as it blocks production.)

Details, from an analysis by our escalation developer, using the Nunit 3.4.1 build:

The process nunit-agent-x86.exe is kept alive because its main thread is waiting forever for a manual Event to be set. This is the call stack of the main thread:

    OS Thread Id: 0x6ff0 (0)
    Child SP IP Call Site
    00afede8 7745718c [HelperMethodFrame_1OBJ: 00afede8] System.Threading.WaitHandle.WaitOneNative(System.Runtime.InteropServices.SafeHandle, UInt32, Boolean, Boolean)
    00afeecc 723a49d1 System.Threading.WaitHandle.InternalWaitOne(System.Runtime.InteropServices.SafeHandle, Int64, Boolean, Boolean)
    00afeee4 723a4998 System.Threading.WaitHandle.WaitOne(Int32, Boolean) <<< Waits forever here…
    00afeef8 723a496e System.Threading.WaitHandle.WaitOne()
    00afef00 0539707e NUnit.Engine.Agents.RemoteTestAgent.WaitForStop()
    00afef08 00d2173e NUnit.Agent.NUnitTestAgent.Main(System.String[]) [C:\Users\xxxxxxxxxxx\AppData\Local\JetBrains\Shared\v04\DecompilerCache\decompiler\9E876AE6-521E-4A30-9583-EE5D668CC2CC\76\160e3076\NUnitTestAgent.cs @ 92]
    00aff0e8 73301376 [GCFrame: 00aff0e8]

This is the NUnit source code relative to the methods above:

    public void WaitForStop()
    {
        this.stopSignal.WaitOne();
    }
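For comparison, a purely hypothetical defensive variant (not what NUnit shipped) would bound the wait so the agent process can exit even if the runner never sets the stop signal:

```csharp
// Hypothetical sketch only -- not NUnit's code. WaitHandle.WaitOne(TimeSpan)
// returns false on timeout, letting the caller log the condition and allow
// the agent process to exit instead of waiting forever on stopSignal.
public bool WaitForStop(TimeSpan timeout)
{
    return this.stopSignal.WaitOne(timeout);
}
```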


@CharliePoole commented on Mon Aug 29 2016

This issue should have been moved to the new nunit-console repository when we split repos. Doing so now.

TempResourceFile.Dispose causes run to hang

@CharliePoole commented on Tue Sep 29 2015

This Dispose method contains an infinite loop, exited via a break statement. Sometimes, when all other threads are also waiting, the exit requirements are never met.

Someone needs to examine the reasons behind the logic that is there and modify it so it doesn't hang.
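One common shape for such a fix, sketched here as a hypothetical stand-in (TempFileSketch and its members are illustrative names, not NUnit's code), is a bounded retry: cap the number of attempts instead of looping until an exit condition that may never become true while other threads are blocked.

```csharp
using System;
using System.IO;
using System.Threading;

// Illustrative stand-in for a disposable temp-file wrapper.
class TempFileSketch : IDisposable
{
    private readonly string path;

    public TempFileSketch(string path) { this.path = path; }

    public void Dispose()
    {
        const int maxAttempts = 10;
        for (int attempt = 0; attempt < maxAttempts; attempt++)
        {
            try
            {
                File.Delete(path); // succeeds once no other thread holds the file
                return;
            }
            catch (IOException)
            {
                Thread.Sleep(100); // file still in use; retry briefly
            }
        }
        // After maxAttempts, leave the file behind rather than hang the run.
    }
}
```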


@CharliePoole commented on Tue Sep 29 2015

I've noticed that this is often a problem when running a non-parallelizable STA test. Makes sense.

Engine modifies TestPackage

@CharliePoole commented on Thu Jan 07 2016

In some situations, when project files are in use, the engine modifies any test package provided to it. This can cause unexpected side effects for the runner calling into the engine. See nunit/nunit-gui#69 for example.

We should review the code and, if possible, avoid modifying the original copy of a test package provided to the engine.

--domain=None does not work

@rprouse commented on Sun Aug 14 2016

Tested in 3.4.1, should retest in master.

Running tests from the command line with --domain=None, or with --inprocess --domain=None, fails with: "Could not load file or assembly 'nunit.framework' or one of its dependencies. The system cannot find the file specified." I am running using the installed console, from the test project bin directory.

Ideally, the console is in a different directory and doesn't have a copy of the framework sitting in the build directory. Our build output isn't the best test for this. I tested with an installed version of the console.


@CharliePoole commented on Sun Aug 14 2016

Domain=None has only ever worked if the test assembly and all nunit assemblies are in the same directory. I talked about removing it but there are a few odd uses for testing code that can only ever run in a primary AppDomain.


@jcansdale commented on Sun Aug 14 2016

I use it: πŸ˜‰
https://github.com/nunit/nunit3-tdnet-adapter/blob/master/src/NUnitTDNet.Adapter/EngineTestRunner.cs#L51

The NUnit assemblies will be loaded into the LoadFrom context (rather than being in the same directory).


@CharliePoole commented on Sun Aug 14 2016

@jcansdale Unless one of our contributors slipped something past us :-) NUnit doesn't use LoadFrom except in the presence of a special package setting that is not accessible to users. Is that what you are using? If you are, it's a little risky, since it's not a published interface. We can look into this more closely if you like.

@rprouse Is domain=None failing in cases where you have used it successfully before?


@jcansdale commented on Sun Aug 14 2016

It's only when I create a TestPackage object directly that I use the AddSetting("DomainUsage", "None") setting. It's my code that resolves the NUnit assemblies and loads them using LoadFrom. I don't actually use this setting with the NUnit console (which I realize this issue is about). Sorry. πŸ˜„
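The usage described above can be sketched with the NUnit engine API (TestPackage and AddSetting are part of NUnit.Engine; resolving and loading the NUnit assemblies via Assembly.LoadFrom remains the caller's responsibility, and "MyTests.dll" is a placeholder):

```csharp
using NUnit.Engine;

// Build the package directly and ask the engine not to create a new
// AppDomain, so tests run in the AppDomain the caller already set up.
var package = new TestPackage("MyTests.dll");
package.AddSetting("DomainUsage", "None");
```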


@CharliePoole commented on Sun Aug 14 2016

So, what I imagine happens is that NUnit attempts to load the assemblies and finds the copy you already loaded - I'm slightly surprised at that, since they are in the LoadFrom context.

There is actually a "secret" setting that will cause NUnit to create an AssemblyResolver that will do a LoadFrom - search for ImageRequiresDefaultAppDomainResolver.


@CharliePoole commented on Sun Aug 14 2016

@rprouse I agree there's nothing critical about this one. I was surprised you could run domain=none, but I realized it will work if you are in the nunit project and running in the bin directory. We should really address what this setting does and what we want it to do. The defects can either be fixed or documented as far as I'm concerned.

@jcansdale's use is actually much safer than use from the command-line. He specifies it in the package because he has already created the appdomain and doesn't want a new one.


@rprouse commented on Mon Aug 15 2016

I don't think we can do much with domain=None to fix it. There are so many ways that it could go wrong with multiple assemblies referencing different versions of the same assembly, etc. We could document it, but if we do, we should probably add a warning to the command line help too.
