
A unit testing framework for Beckhoff's TwinCAT 3

License: Other

Topics: tcunit, twincat, test-framework, beckhoff, unit-testing, unit-test, plc, industrial-automation, twincat3

TcUnit's Introduction


TcUnit - TwinCAT unit testing framework

Welcome to the documentation of TcUnit - an xUnit testing framework for Beckhoff's TwinCAT 3.

The main documentation site is available at:
www.tcunit.org

What is test driven development and unit testing?
Familiarize yourself with the basic concepts and specifics for TcUnit.

Want to know how to get started?
Read the user guide.

Want to see a more advanced programming example?
Read the programming example.

Want to download a precompiled version of the library?
Go to the releases.

Want to include TcUnit tests into a CI/CD pipeline?
Check out the TcUnit-Runner project.

Want to contribute to the project?
That's fantastic! There are two ways to do this.

  1. Contribute your time and knowledge by fixing issues or adding new features. Please read the CONTRIBUTING guidelines first.
  2. Become a sponsor.

Have any questions? Found a bug or want to discuss an idea?
Check the F.A.Q. and the open and closed issues. If your issue does not already exist, create a new one. For general ideas/discussions, use the discussions.

TcUnit's People

Contributors

aliazzzz, beidendorfer, claytonketner, davidhopkinsfbr, dfreiberger, i-campbell, joergwitt, kubao, omegacore, oswin02, roald87, rogerchristopher, sagatowski, sebvc, slacawallace, stefanbesler


TcUnit's Issues

restarting tests after completion

Hi,

I have the following scenario:
While writing new test cases I frequently want to verify the new tests.
I understand that the framework is built to just "run once", but this is something of a dilemma when developing new tests. Is there an easy way to rerun the tests without (cold) resetting the controller? I would like to hear your thoughts on this.

With kind regards,

Aliazzz

God mode functions for overwriting protected variables

After writing many complicated tests for my company's codebase (phew), I think TcUnit could use a couple of these helper functions. In a number of scenarios, TwinCAT won't let you write directly to certain variables:

  • Due to access restrictions (e.g. a variable in a FB's VAR)
  • The variable being set as I/O (i.e. AT %I* or AT %Q*)
  • Probably other scenarios, too

Writing to these variables wouldn't make sense and should be prevented in the normal PLC code, so having special privileges during testing is a must. If you try to write to these protected variables like:

TEST('foo');
myFB.someProtectedVariable := 5;
...

You'll get a compiler error saying 'someProtectedVariable' is no input of 'MY_FB'.

In my experience, this is most important for writing to I/O variables and for disabling timers (by setting PT to zero). This issue can be worked around by using pointers. In my PLC projects, I've added helper functions like WRITE_ANY_INT or WRITE_ANY_TIME, which all do the same general thing:

VAR_INPUT
    ptr: POINTER TO INT;  // Pointer to the protected variable
    val: INT;
END_VAR
===================================
ptr^ := val;

Using this function allows you to bypass the compiler warning (and the runtime has no issue executing it) so that you can make your code more "testable". Ex:

WRITE_ANY_BOOL(ADR(myFB.rawInput), TRUE);
or
WRITE_ANY_TIME(ADR(myFB.setpointDelayTime), T#0MS);

Having these helpers in TcUnit would also be nice because then I (read: others) wouldn't be able to access them in my projects that use the libraries being tested. Speaking of which...

Hide Reference

I noticed that your website and this repo don't mention the Hide reference option for referenced libraries (or at least I couldn't find it). This option lets you hide TcUnit from your other projects. For example:

  • You write a library named MyLibrary, which has tests written in TcUnit
  • You make a PLC project MyProject, which references MyLibrary

If you use Hide reference on TcUnit in MyLibrary, then TcUnit won't show up in the imports list of MyProject. You can find it in the Properties tab:
[Screenshot: the 'Hide reference' option in the library's Properties tab]

Thanks again for the continued work and upkeep on this library!

Usage of FB_ADSAssertMessageFormatter.LogAssertFailure should warn

The usage of FB_ADSAssertMessageFormatter.LogAssertFailure should warn if the combined character count of
FinalMessage + Message
is more than 255.
Beckhoff's Tc2_System.ADSLOGSTR does not give any warning (nor a return value other than 0) if it fails to print the message; it simply discards the message and does not print it.
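
A minimal sketch of such a length check, assuming the method has access to the FinalMessage and Message strings mentioned above (TotalLength and the warning text are purely illustrative):

VAR
    TotalLength : INT;
END_VAR
===================================
TotalLength := LEN(FinalMessage) + LEN(Message);
IF TotalLength > 255 THEN
    // Hypothetical warning; ADSLOGSTR itself silently discards oversized messages
    Tc2_System.ADSLOGSTR(msgCtrlMask := ADSLOG_MSGTYPE_WARN,
                         msgFmtStr := 'TcUnit: assert message exceeds 255 characters and will not be logged (length %s)',
                         strArg := INT_TO_STRING(TotalLength));
END_IF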

Additional tests for 2D/3D REAL/LREAL arrays

As described in pull request tcunit/TcUnit-Verifier#3, some additional tests are necessary to fully verify that the assert functionality for the 2D/3D REAL/LREAL arrays is fully working. The pull request has been closed, so the remaining tests will be handled through this issue.

I'll copy and paste the text from the PR here:

I've reviewed the code now and have some comments:
Some tests don't have complete arrays; many of them are mostly filled with zeroes, so it's just a zero-to-zero comparison, which is not optimal.
See for example:
Test_LREAL_Array2d_DifferInSize
which has
a : ARRAY[-4..3, -1..2] OF LREAL := [64, 98, -32768, 32767, 5478, -378, 42, 6234];

This only fills up a small part of the 2D array. The same goes for the 3D arrays.

Could you update the tests with values so the arrays are completely filled?
I'll commit my changes and then leave this PR open so you can fork the new changes and add yours on top. Once this is done I'll close the PR.

I also think more tests are necessary for the 2D and 3D arrays to prove that the method asserting them takes the x, y, z dimensions into consideration when checking whether the content differs. This declaration and single test is not enough:

METHOD PRIVATE Test_LREAL_Array3d_DifferInContent
VAR
    a : ARRAY[-5..-4,-1..0,0..1] OF LREAL := [42, -23, -32768];
    b : ARRAY[1..2,4..5,6..7] OF LREAL := [42, 24, -32768];
END_VAR

First, it doesn't even fill the whole array; second, it's only a single test and doesn't cover all the different variants of differences (considering that the assert method doing this check actually iterates through all three dimensions).
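
To make the point concrete, a test like the following (the test name and values are only a suggestion, not existing TcUnit-Verifier code) would fill every element and differ in exactly one position, so the iteration over all three dimensions is actually exercised:

METHOD PRIVATE Test_LREAL_Array3d_DifferInSingleElement
VAR
    a : ARRAY[1..2, 1..2, 1..2] OF LREAL := [1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5];
    b : ARRAY[1..2, 1..2, 1..2] OF LREAL := [1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 9.9, 8.5]; // differs only at [2,2,1]
END_VAR

Similar variants could then differ in the first, a middle, and the last element of each dimension.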

Remove duplicate code

In several of the assert methods there is quite a lot of duplicated code.
Clean this up so that these parts use common functions/function blocks.

add test progress feedback in the log

I'd like to see a feature where the user gets feedback between tests on the test runner's progress: it could send a message to the log stating the progress in percent, or as test x out of y, for every new test.
The trade-off is some extra performance cost and clutter in the log, but at this stage the user gets no progress indication at all (only errors, if any), apart from checking the test runner state in online mode.
I think this progress feedback is especially essential in large unit tests (e.g. ones that engage FBs with timers), as my personal use case involves this a lot and it's really annoying to have to guess what the current progress is.
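
As a rough sketch of what such a log line could look like (the counters CurrentTestNumber and NumberOfTests are assumed names here, not existing TcUnit variables):

// Emit one hint-level message whenever a new test starts
Tc2_System.ADSLOGSTR(msgCtrlMask := ADSLOG_MSGTYPE_HINT,
                     msgFmtStr := 'TcUnit progress: test %s',
                     strArg := CONCAT(CONCAT(UINT_TO_STRING(CurrentTestNumber), ' of '),
                                      UINT_TO_STRING(NumberOfTests)));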

A test running over >= 2 cycles considered double

If a test is required to run over two cycles (or more), then that test is considered by the test framework to be a duplicate, that is, it is treated as if it had been reported with the same name twice (and is thus reported as such).

A test should only be considered a duplicate if it is called with the same name in the same cycle; otherwise it is not a duplicate.

Doing assert two times or more in different cycles for arrays

If you do a (failing) assert for standard data types in two different cycles, the framework takes care of this and only prints one of them.

If the same is done for an array, then both of them are printed (actually, one gets printed for every cycle the assert is called).

The behavior should be the same for both arrays and non-arrays.

ReportResult in FB_AssertResultStatic should take expected/actual value into consideration

The current solution of reporting asserts is not bullet-proof. Although the code says it checks for the complete combination of:

  • Test message
  • Test instance path
  • Expected value
  • Actual value

It actually only checks for the first occurrence of the test message/test instance path, and only then checks whether the expected and actual values match as well. What the reporting should do is check whether the combination of all four exists.

For arrays it needs to be done differently. Here we need to replace the expected & actual value with the size of both arrays and the types of both arrays.

Question: Is it possible to add an identifier to every assert? If we assume that every assert is called in every cycle, we can create an index for every assert in every test suite.

Rename GVL_Constants

If you have a project that is using TcUnit together with the TE1200 static code analysis tool from Beckhoff, it will issue a warning: "SA0027: Object name 'GVL_Constants' already used in Library 'tcunit, 0.4.1.0 (www.tcunit.org)'".
GVL_Constants is quite a common name to use for constant GVLs, so it would be better if the constants GVL were renamed to something like GVL_Constants_TcUnit, so that no name clashes occur.

Add functionality to disable a test

Under certain circumstances it might be necessary to temporarily disable a test, so that it is not executed (or not taken into consideration for the results). Add functionality so that it's possible to "mark" a test as disabled.

Ambiguity of method name TEST_FINISHED_NAME

I tried out the TEST_FINISHED_NAME function and found out that I had completely misunderstood its meaning. I thought this function was meant to check whether a certain test had finished or not. However, it sets a certain test as finished.

It would be a good idea to change the name to something clearer such as SET_TEST_FINISHED.

Also, I made a new function called HAS_TEST_FINISHED, which does what I initially thought TEST_FINISHED_NAME would do: it checks whether a certain test in the current test suite has finished.

I could open up a PR with this new function if you want.
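
For clarity, here is how the two names would be used (SET_TEST_FINISHED and HAS_TEST_FINISHED are only the proposals from this issue, not part of the released API; TEST_FINISHED_NAME is what exists today):

// Proposed naming: SET_TEST_FINISHED marks a test as finished,
// HAS_TEST_FINISHED only queries whether it has finished
SET_TEST_FINISHED('FirstTest');

IF HAS_TEST_FINISHED('FirstTest') THEN
    // safe to start the next, dependent test
END_IF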

File Read for unit testing

I was able to read the .csv file and load the arrays Input_v_Data and OUT_Sollwert_Expected_Data during simulation.
When I am unit testing the function block using TcUnit, I think the data is not read from the CSV file, and Input_v_Data and OUT_Sollwert_Expected_Data are always 0.

The question is: when unit testing using TcUnit, will files be read?

fb_TextModeRead(
    bRead := TRUE,
    sFileName := sFilePath,
    database => OUT_database,
    bFileReadCompleted => bIsFileReadCompleted);

fb_DreieckTest(
    IN_Start := ,
    IN_ZykZeit := Input_ZykZeit,
    IN_v := Input_v_Data[nIndex],
    OUT_Sollwert => OUT_Sollwert_Result,
    OUT_Endlage => OUT_Endlage_Result);

AssertEquals_REAL(
    Expected := OUT_Sollwert_Expected_Data[nIndex],
    Actual := OUT_Sollwert_Result,
    Delta := Delta_Value,
    Message := 'Test $'failed at $'OUT_Sollwert$'');

The line it executes inside fb_DreieckTest is:
OUT_Sollwert:= OUT_Sollwert + (IN_v * IN_ZykZeit);

Test Result

FAILED TEST 'PRG_TEST.fb_DreieckTest_Test@Test', EXP: 0.0, ACT: 6.0, MSG: Test 'failed at 'OUT_Sollwert' 
FAILED TEST 'PRG_TEST.fb_DreieckTest_Test@Test', EXP: 0.0, ACT: 7.0, MSG: Test 'failed at 'OUT_Sollwert' 

Unifying visual studio versions across projects

In the CONTRIBUTING.md you mention that we should use the same Visual Studio version number as the original solution file. I found that there are three different Visual Studio versions in use:

  • TcUnit (Visual Studio 2013)
  • TcUnit-Verifier_TwinCAT (Visual Studio 14 = Visual Studio 2015)
  • TcUnit-Verifier (Visual Studio 15 = Visual Studio 2017)

I happen to have them all installed. Nevertheless, it would be a good idea to use one version for all projects.

Only do AssertResults.ReportResult if Expect and Actual differs

In all the different assert methods, AssertResults.ReportResult() is called regardless of whether Expected equals Actual. This is unnecessary, because that method exists so we know whether we should report (print) the assert, which we only do in case they differ. Basically, we don't need to do anything if Expected = Actual.

Boilerplate reduction and using a TEST_FINISHED method

Hi Jakob - thanks for the awesome library! I've been using it for a few days and noted a few minor things. I've implemented these on my machine and can put up a fork if these sound good to you, although I could very well not be seeing some edge cases that this wouldn't work for.

Boilerplate Reduction

There are a few duplication/boilerplate pain points around making tests. When I create a new test FB, I need to remember to add a bunch of things, like:

  • call_after_init pragma
  • EXTENDS and IMPLEMENTS inheritance
  • An instance of FB_Assert
  • Implementing the interface (creating the RunTests method)
  • Creating FB_init and filling it with the required method call

I understand if some things are necessary, but it would be really nice to reduce this list. Is the RunTests method necessary? Couldn't we just have the test runner execute the body of the function block instead? (This is done by replacing variables that are of the I_TestSuite type with POINTER TO FB_TestSuite.) Removing this requirement would also remove the need to add the IMPLEMENTS .... FB_TestSuite could also already contain an instance of FB_Assert.

After some changes to TcUnit, I was able to get the boilerplate for each new test suite FB down to:

  • Add {attribute 'call_after_init'}
  • Add the inheritance EXTENDS FB_TestSuite

TEST_FINISHED Method

It seems like it would make more sense to have a method to mark a test as finished, rather than using the return value of RunTests. This would be especially helpful for when a test requires multiple cycles to complete. E.g.:

METHOD Test1
---------------------------------
TEST('Test1');

// Test does things

IF endCondition THEN
    TEST_FINISHED();
END_IF

Investigate if MEMCPY should be used in F_AnyToUnionValue

In the free function F_AnyToUnionValue() there is a lot of bit-shifting going on in order to locate all the bits in the right place when going from an ANY value to a union value. Couldn't each instance of the shift be replaced by a simple MEMCPY? Does byte order matter in this case?
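
A minimal sketch of what the MEMCPY variant could look like, assuming the ANY input is called AnyValue and the union instance is called UnionValue (both placeholder names): the ANY type already carries a pointer to the value and its size, so a byte-wise copy should preserve the layout. Byte order would only matter if the union were later reinterpreted assuming a different endianness.

// Copy the raw bytes of the ANY input straight into the union
Tc2_System.MEMCPY(destAddr := ADR(UnionValue),
                  srcAddr := AnyValue.pValue,
                  n := DINT_TO_UDINT(AnyValue.diSize));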

Export results in Xunit XML format

Connecting also to #7, it would be nice if the framework could output the results in the standard xUnit XML format. This way other integration software (e.g. the Jenkins you mention) could easily manage and display the results (e.g. for Jenkins via the JUnit Jenkins plugin).

I believe these pointers can be useful for the format:
https://gist.github.com/erikd/4192748
https://gist.github.com/nil4/7a3cd9c23835ec6b126fe588e836a2e8
https://github.com/windyroad/JUnit-Schema

Integration to Jenkins/build automation server

Today it's only possible to run the tests locally on a developer/engineering machine. In large projects this is usually not optimal, as the amount of source code can be overwhelmingly large.

In the sphere of continuous integration and continuous delivery there needs to be a more automated way to run the tests. One solution/suggestion is to write a program/script that automatically runs the tests on the build server using a combination of the Microsoft Visual Studio development tools environment plus the TwinCAT automation interface.

Change methods GetAmount... to GetNumber...

Before you freeze the API method names it might be good to change the method names which start with GetAmount... to GetNumber.... From the Cambridge dictionary:

We use amount of with uncountable nouns. Number of is used with countable nouns:

So GetAmountOfFailedTests then becomes GetNumberOfFailedTests.

F_IsAnyEqualToUnionValue comparison of REAL/LREAL

In F_IsAnyEqualToUnionValue, in the enumeration for TYPE_LREAL and TYPE_REAL a comparison is done between two possible REALs/LREALs. This should most likely take the delta into consideration as well.
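
A minimal sketch of the intended comparison (ExpectedValue, ActualValue and ValuesAreEqual are placeholder names):

// Tolerance-based comparison instead of a direct '=' for REAL/LREAL
ValuesAreEqual := ABS(ExpectedValue - ActualValue) <= Delta;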

Different prints depending on if using AssertEquals(ANY) or AssertEquals_DataType

If I for example declare two variables with the data type boolean:
testVar1 : BOOL;
testVar2 : BOOL;

And assert that these two are the same, I get different output depending on whether I use:
Assert.AssertEquals_BOOL(testVar1, testVar2) versus
Assert.AssertEquals(testVar1, testVar2)

The variant that takes ANY should use the underlying "primitive data type"-versions of the assert methods if the input is a primitive data type.

Too many ADSLOGSTR() causes some to become lost

As discovered in issue #26, when doing many ADSLOGSTR() calls it seems that the AMS router loses some of them. For instance, running the TcUnit-Verifier makes it report different results (in terms of how many error messages are shown) with every run!

This can (hopefully) be solved by queueing each ADSLOGSTR() message and consuming them one by one with a timer in between, basically a buffer.
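
A rough sketch of such a buffer (all names here are illustrative; a real implementation would live inside TcUnit's message formatter and also handle enqueueing):

VAR
    MessageBuffer : ARRAY[1..100] OF Tc2_System.T_MaxString; // queued ADSLOGSTR messages
    ReadIndex     : UINT := 1;
    BufferCount   : UINT;
    MessageTimer  : TON;
END_VAR
===================================
// Emit at most one buffered message per timer period so the AMS router keeps up
MessageTimer(IN := NOT MessageTimer.Q, PT := T#10MS);
IF MessageTimer.Q AND BufferCount > 0 THEN
    Tc2_System.ADSLOGSTR(msgCtrlMask := ADSLOG_MSGTYPE_HINT,
                         msgFmtStr := '%s',
                         strArg := MessageBuffer[ReadIndex]);
    ReadIndex := ReadIndex + 1;
    IF ReadIndex > 100 THEN
        ReadIndex := 1;
    END_IF
    BufferCount := BufferCount - 1;
END_IF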

Add rising-edge triggers for certain messages

If a test is defined with a name that is already used in that test suite, and that test suite does not return FINISHED in the same cycle, then multiple ADS messages will be created for that error (because we are calling the TEST('testname') method several times). This of course is not good, as we will send several messages for the same error. The suggestion is to change this line inside FB_TestSuite:
TestNames : ARRAY[1..100] OF Tc2_System.T_MaxString;
Instead of this being an array of strings, it should be an array of structs (or function blocks), where one of the fields is an R_TRIG, and only when its Q is TRUE should we send a message notification. The suggestion is to also move the TestFailed boolean (which is currently a separate array) into this structure/function block.
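
A sketch of what such a structure could look like (the type and field names are only illustrative):

TYPE ST_TestEntry :
STRUCT
    TestName             : Tc2_System.T_MaxString;
    TestFailed           : BOOL;   // moved in from the separate array
    DuplicateNameTrigger : R_TRIG; // notify only on the rising edge
END_STRUCT
END_TYPE

// The declaration in FB_TestSuite would then become:
TestNames : ARRAY[1..100] OF ST_TestEntry;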

TcUnit returns erroneous results with 4020.32

When running the example project (simple) with 4020.32 the following result is returned:

==========TESTS FINISHED RUNNING==========
Test suites: 0
Tests: 0
Successful tests: 0
Failed tests : 0

Obviously this is not correct, and for some reason it seems the framework returns the wrong result with 4020.32 (or possibly even all of 4020)?

Multiple assert instances in one test suite causes issues

Instantiating more than one instance of FB_Assert in your test suite causes incorrect test pass/fail reporting, depending on the order in which they are instantiated. Any instance after the first will report incorrectly, meaning that if the assert fails:

  • The test will still show in the final test summary as passing
  • The instance path in the log output is empty: FAILED TEST '@testName', ...

Granted, this is a pretty obscure case - I doubt anyone would purposefully create two instances of assert, but it does create a confusing output.

Add functionality to make it possible to ASSERT arrays

It's currently only possible to do assertions on the primitive data types of IEC 61131-3 (incl. the ANY type), but it would be really good if it were also possible to do assertions on arrays (of the various types).

Replacing GVL_Constants_TcUnit for a ParameterList

Hi there

I'd like to suggest a small 'upgrade': replacing GVL_Constants_TcUnit with a parameter list.
The advantage of a parameter list is that it is still part of the library, while its values can still be changed if the user wishes (!)

Examples of "parameter list" usage;
https://stefanhenneken.wordpress.com/2017/08/08/iec-61131-3-parameterbergabe-per-parameterliste/
And in my own work:
https://forge.codesys.com/lib/debuglogger/home/Home/
(see subject Change the default Ringbuffer size)

Remove unused code

Remove unused code, such as:
METHOD PUBLIC LogMessageError in FB_ADSAssertMessageFormatter

implicit typecasting throws errors + solution

Hi,

I found that certain runtimes (not all) produce the following code exceptions in two FB_Assert methods. Especially "SetTestFailed" and "FINDTestSuiteInstancePath" are affected.

The problem lies in DWORD_TO_ULINT( ADR(This^)).

The type cast used here is not very clean. Mixing addresses and integer values is always a bad idea. The compiler gives a warning (a hint!) when the library is compiled: possibly cannot convert FB_Assert to a DWORD. An address is simply not a DWORD, and it depends on the compiler of the target platform whether the conversion is tolerated. TwinCAT runtimes are also affected by this phenomenon, as Beckhoff uses mixed CPU architectures with underlying compilers by CODESYS (ARM/x86/x64/PowerPC, etc.).

// Original code
METHOD PRIVATE SetTestFailed
VAR
    Counter : UINT;
END_VAR

FOR Counter := 1 TO GVL.AmountOfInitializedTestSuites BY 1 DO
    IF GVL.TestSuiteAssertAddresses[Counter] = DWORD_TO_ULINT(ADR(This^)) THEN
        GVL.TestSuiteAddresses[Counter].SetTestFailed(TestName := GVL.CurrentTestNameBeingCalled);
    END_IF
END_FOR

EDIT: Another solution was suggested by a project member of mine:

   IF GVL.TestSuiteAssertAddresses[Counter] = ANY_TO_ULINT( ADR(this^) ) THEN
       ...
   END_IF

AssertArrayEquals should have REAL and LREAL versions

AssertEquals_REAL and AssertEquals_LREAL accept an extra delta parameter for specifying acceptable precision for equality.

There currently don't seem to be any AssertArrayEquals methods for REAL and LREAL types, presumably because of the invalidity of doing naive equality comparisons for floating point types. But it would be nice if there were floating point array equality assertions which have the same delta parameter as the individual value equality asserts.

It seems like a fairly straightforward thing to implement at first glance. Would you like me to draft an implementation and submit a PR?
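
One possible shape for such a method, mirroring the Delta parameter of AssertEquals_LREAL (this signature is only a guess; variable-length ARRAY[*] parameters must be VAR_IN_OUT in TwinCAT 3, and TcUnit's existing array asserts may well be declared differently):

METHOD PUBLIC AssertArrayEquals_LREAL
VAR_IN_OUT
    Expecteds : ARRAY[*] OF LREAL;
    Actuals   : ARRAY[*] OF LREAL;
END_VAR
VAR_INPUT
    Delta   : LREAL;                  // acceptable difference per element
    Message : Tc2_System.T_MaxString;
END_VAR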

Tests with timers inside the FB

Currently tests only run for one scan.

This makes running tests on FBs with internal timers difficult. There is a workaround, but it would be great to have tests that are meant to run for a minimum amount of time.

An empty test suite causes the test report to never show

If you have a test suite with nothing in the body of the FB (i.e. if no test methods are called, and thus TEST() and TEST_FINISHED() are never called in the test suite), the final test report never gets logged. This is likely due to the runner thinking it is still running tests.

I thought this might be a quick fix by modifying FB_TestSuite's AllTestsFinished method and making it print a warning and return true if NumberOfTests is 0, but it seems like on the first cycle, NumberOfTests is always zero. That might be easily fixable by changing the order of execution of some things, but I haven't had a chance to take a deeper look yet.

Add a way to indicate a specific test from the suite has finished

Relating to: #37

I was trying to use TcUnit to test a statechart, using a case statement. I acknowledge this is somewhat outside the normal realm of unit testing. Regardless, using a case statement to change which test is running for a given cycle works, with some caveats, and this leads to a "bug" with TcUnit. tl;dr: TcUnit does not currently support nested tests, but it can!

Tests apparently register themselves with the test engine, and this registration includes a boolean to indicate the test is complete. If all test suites and tests indicate they are done when TcUnit runs, then TcUnit processes the results and sends them for logging (I think).

If your test suite is using a case statement to remain focused on a single test at a time, your first test might not get registered before TcUnit finishes. To counter this, you can either make sure the first case has a "wrapper" test, or have your first test start in the first case. Then, as you move from case to case, you indicate that each test in that case is finished. TcUnit apparently figures out the test context from the last TEST('') it sees (I think...).

If you use a wrapper test, TcUnit does not have a way of knowing which test context it's in after TEST_FINISHED is called, i.e. TcUnit will not know it's back in the wrapper test's context.

Here's the test suite example

Testing Switch case and non-Static data

Need help in unit testing the Switch case.

In the below example:

  1. Case 0 is always executed, as BA's default value is 0, and for the next test BA = 1 will not be effective.
  2. When BA = 1 is set explicitly (which is what I wanted), only the IF part is executed, not the ELSE part, as OUT_Sollwert's default value is 0 and the latest data is not stored.

I would like to know:

  1. Is there any way I could test this?
  2. Could you please share a unit testing example of a switch case?

VAR_OUTPUT
    OUT_Sollwert : REAL;
END_VAR

VAR
    BA : INT;
END_VAR

CASE BA OF
    0:
        OUT_Sollwert := data;
        BA := 1;
    1:
        IF (OUT_Sollwert < (Velocity * time)) THEN
            OUT_Sollwert := OUT_Sollwert + (Velocity * time);
        ELSE
            OUT_Sollwert := Endlage_oben;
            BA := 2;
        END_IF
    2:

Messaging should be done by an interface

The messaging that is done via ADS today (such as in FB_TestSuite.AddTest()) by calling Tc2_System.ADSLOGSTR() should go through an interface instead of being hard-coded to always go to ADS. That way events such as this could be logged in a more general way, to other sinks (say, file output, a database, or any other future means).
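
As an illustration only (the interface and method names below are made up, not the TcUnit API), the call sites could then depend on something like:

// Hypothetical logging interface (names are illustrative)
INTERFACE I_TcUnitLogger

// Method of I_TcUnitLogger
METHOD LogInformation : BOOL
VAR_INPUT
    Message : Tc2_System.T_MaxString;
END_VAR

The current ADS-based formatter would then be just one implementation of this interface, and a file- or database-backed logger could be added later without touching FB_TestSuite.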

Ignore assert-prints if test is a duplicate

If a duplicate test name has been defined for a test suite, it's not necessary to print the assertions that follow in that test. Today they are printed simply because they have different messages and expected/actual values.
