dd-sdk-swift-testing's Introduction

Datadog SDK for Swift testing

This SDK is part of Datadog's CI Visibility product. More comprehensive and up-to-date documentation can be found at CI Visibility - Swift Tests.

Getting Started

Link your test targets with the framework (you can use SPM or direct linking).

Set the following environment variables in the Test action:

DD_TEST_RUNNER=1
DATADOG_CLIENT_TOKEN=<your current client token>
SRCROOT=$(SRCROOT)

You may also want to set these other environment variables:

DD_ENV=<The environment you want to report>
DD_SERVICE=<The name of the service you want to report>
DD_SITE=<The Datadog site to upload results to>

Depending on your CI service, you must also set the environment variables to be read from the test executions. See the CI provider environment variables in DDEnvironmentValues.swift for the details of your specific CI.

UITests

For UITests, both the test target and the application launched by the UITests must link with the framework. Environment variables only need to be set in the test target, since the framework automatically injects these values into the application.

Auto Instrumentation

Boolean variables accept any of: "1", "0", "true", "false", "YES", "NO".
String-list variables accept a list of elements separated by "," or ";".
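As an illustration, the parsing rules above can be sketched in Swift (these helpers are hypothetical, not part of the SDK):

```swift
import Foundation

// Hypothetical helpers mirroring the documented rules: booleans accept
// "1"/"0"/"true"/"false"/"YES"/"NO" (case-insensitive); string lists
// are split on "," or ";".
func envBool(_ raw: String) -> Bool? {
    switch raw.lowercased() {
    case "1", "true", "yes": return true
    case "0", "false", "no": return false
    default: return nil
    }
}

func envList(_ raw: String) -> [String] {
    return raw
        .split(whereSeparator: { $0 == "," || $0 == ";" })
        .map { $0.trimmingCharacters(in: .whitespaces) }
        .filter { !$0.isEmpty }
}

print(envBool("YES")!)               // true
print(envList("a.com; b.com,c.com")) // ["a.com", "b.com", "c.com"]
```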

Enabling Auto Instrumentation

DD_ENABLE_STDOUT_INSTRUMENTATION # Captures messages written to `stdout` (e.g. `print()`) and reports them as Logs. (Incurs charges for the Logs product) (Boolean)
DD_ENABLE_STDERR_INSTRUMENTATION # Captures messages written to `stderr` (e.g. `NSLog()`, UITest steps) and reports them as Logs. (Incurs charges for the Logs product) (Boolean)

Configuring Network Auto Instrumentation

Network auto instrumentation has additional settings you can configure:

DD_DISABLE_NETWORK_INSTRUMENTATION # Disables all network instrumentation (Boolean)
DD_DISABLE_HEADERS_INJECTION # Disables all injection of tracing headers (Boolean)
DD_INSTRUMENTATION_EXTRA_HEADERS # Specific extra headers that you want the tool to log (String List)
DD_EXCLUDED_URLS # URLs that you don't want to log or inject headers into (String List)
DD_ENABLE_RECORD_PAYLOAD # Enables reporting a subset of the payloads in requests and responses (Boolean)
DD_MAX_PAYLOAD_SIZE # Sets the maximum payload size that will be reported; 1024 by default (Integer)
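For example, a test run that keeps network tracing but skips noisy hosts and records small payloads might be configured like this (host names, header names, and sizes are placeholders):

```shell
# Hypothetical values; set these in the scheme's Test action or CI config.
export DD_EXCLUDED_URLS="https://analytics.example.com;https://telemetry.example.com"
export DD_INSTRUMENTATION_EXTRA_HEADERS="X-Request-Id,X-Correlation-Id"
export DD_ENABLE_RECORD_PAYLOAD=true
export DD_MAX_PAYLOAD_SIZE=512
```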

You can also enable or disable specific auto instrumentation for some of your tests from Swift or Objective-C by importing the DatadogSDKTesting module and using the DDInstrumentationControl class.

Disable crash handling

You should never need to do this, but in some very specific cases you may want to disable crash reporting for tests (e.g. you want to test your own crash handler):

DD_DISABLE_CRASH_HANDLER # Disables crash handling and reporting. (Boolean) WARNING, read note below

Be aware that if you disable crash reporting, crashing tests won't be reported to the backend and won't appear as failures. If you really, really need to do this for any of your tests, run them in a completely separate target, so you don't disable crash handling for the rest of your tests.

Custom tags

Environment variables

You can use the DD_TAGS environment variable. It must contain pairs of key:value separated by spaces. For example:

DD_TAGS=tag-key-0:tag-value-0 tag-key-1:tag-value-1

If one of your values starts with the $ character, it will be replaced with the environment variable of the same name, if it exists. For example:

DD_TAGS=home:$HOME

It also supports replacing an environment variable at the beginning of the value when it is followed by characters that are not valid in environment variable names (a-z, A-Z or _):

FOO=BAR
DD_TAGS=key1:$FOO-v1 # expected: key1:BAR-v1
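The substitution rule above can be sketched as follows (a hypothetical helper, not the SDK's implementation; `Character.isLetter` is used to approximate the documented a-z/A-Z range):

```swift
import Foundation

// Expand a leading "$NAME" in a tag value using the given environment.
// The variable name ends at the first character that is not a letter
// or '_'; if the variable is not defined, the value is left unchanged.
func expandTagValue(_ value: String, env: [String: String]) -> String {
    guard value.hasPrefix("$") else { return value }
    let body = value.dropFirst()
    let nameEnd = body.firstIndex { !($0.isLetter || $0 == "_") } ?? body.endIndex
    let name = String(body[..<nameEnd])
    guard let replacement = env[name] else { return value }
    return replacement + String(body[nameEnd...])
}

print(expandTagValue("$FOO-v1", env: ["FOO": "BAR"])) // prints "BAR-v1"
```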

Using OpenTelemetry (only for Swift)

The Datadog Swift testing framework uses OpenTelemetry as the tracing technology under the hood. You can access the OpenTelemetry tracer using DDInstrumentationControl.openTelemetryTracer and use any OpenTelemetry API. For example, to add a tag/attribute:

import DatadogSDKTesting
import OpenTelemetryApi

let tracer = DDInstrumentationControl.openTelemetryTracer as? Tracer
let span = tracer?.spanBuilder(spanName: "ChildSpan").startSpan()
span?.setAttribute(key: "OTTag2", value: "OTValue2")
span?.end()

The test target needs to link explicitly with opentelemetry-swift.

Using Info.plist for configuration

As an alternative to setting environment variables, all configuration values can be provided by adding them to the Info.plist file of the test bundle (not the app bundle). If the same setting is set both in an environment variable and in the Info.plist file, the environment variable takes precedence.
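For example, the Getting Started configuration could instead live in the test bundle's Info.plist (assuming, as an illustration, that the keys match the environment variable names exactly; the token and service name are placeholders):

```xml
<key>DD_TEST_RUNNER</key>
<string>1</string>
<key>DATADOG_CLIENT_TOKEN</key>
<string>your-client-token</string>
<key>DD_SERVICE</key>
<string>my-ios-app</string>
```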

Manual testing API

If you use XCTests with your Swift projects, the DatadogSDKTesting framework automatically instruments them and sends the results to the Datadog backend. If you don't use XCTest, you can instead use the Swift/Objective-C manual testing API, which also reports test results to the backend.

The API is based around three concepts: test sessions, test suites, and tests.

Test sessions

A test session includes the whole process of running the tests, from when the user launches the testing process until the last test ends and results are reported. The test session also includes starting the environment and the process where the tests run.

To start a test session, call DDTestSession.start() and pass the name of the module or bundle to test.

When all your tests have finished, call session.end(), which forces the library to send all remaining test results to the backend.

Test Suites

A test suite comprises a set of tests that share common functionality. They can share a common initialization and teardown, and can also share some variables.

Create test suites in the test session by calling session.suiteStart() and passing the name of the test suite.

Call suite.end() when all the related tests in the suite have finished their execution.

Tests

Each test runs inside a suite and must end in one of these three statuses: pass, fail, or skip. A test can optionally have additional information like attributes or error information.

Create tests in a suite by calling suite.testStart() and passing the name of the test. When a test ends, one of the predefined statuses must be set.

API interface

class DDTestSession {
    // Starts the session.
    // - Parameters:
    //   - bundleName: Name of the module or bundle to test.
    //   - startTime: Optional. The time the session started.
    static func start(bundleName: String, startTime: Date? = nil) -> DDTestSession
    //
    // Ends the session.
    // - Parameters:
    //   - endTime: Optional. The time the session ended.
    func end(endTime: Date? = nil)
    // Adds a tag/attribute to the test session. Any number of tags can be added.
    // - Parameters:
    //   - key: The name of the tag. If a tag with the same name already exists,
    //     its value will be replaced by the new value.
    //   - value: The value of the tag. Can be a number or a string.
    func setTag(key: String, value: Any)
    //
    // Starts a suite in this session.
    // - Parameters:
    //   - name: Name of the suite.
    //   - startTime: Optional. The time the suite started.
    func suiteStart(name: String, startTime: Date? = nil) -> DDTestSuite
}
    //
public class DDTestSuite : NSObject {
    // Ends the test suite.
    // - Parameters:
    //   - endTime: Optional. The time the suite ended.
    func end(endTime: Date? = nil)
    // Adds a tag/attribute to the test suite. Any number of tags can be added.
    // - Parameters:
    //   - key: The name of the tag. If a tag with the same name already exists,
    //     its value will be replaced by the new value.
    //   - value: The value of the tag. Can be a number or a string.
    func setTag(key: String, value: Any)
    //
    // Starts a test in this suite.
    // - Parameters:
    //   - name: Name of the test.
    //   - startTime: Optional. The time the test started.
    func testStart(name: String, startTime: Date? = nil) -> DDTest
}
    //
public class DDTest : NSObject {
    // Adds a tag/attribute to the test. Any number of tags can be added.
    // - Parameters:
    //   - key: The name of the tag. If a tag with the same name already exists,
    //     its value will be replaced by the new value.
    //   - value: The value of the tag. Can be a number or a string.
    func setTag(key: String, value: Any)
    //
    // Adds error information to the test. Only one errorInfo can be reported by a test.
    // - Parameters:
    //   - type: The type of error to be reported.
    //   - message: The message associated with the error.
    //   - callstack: Optional. The callstack associated with the error.
    func setErrorInfo(type: String, message: String, callstack: String? = nil)
    //
    // Ends the test.
    // - Parameters:
    //   - status: The status reported for this test.
    //   - endTime: Optional. The time the test ended.
    func end(status: DDTestStatus, endTime: Date? = nil)
}
    //
// Possible statuses reported by a test:
enum DDTestStatus {
  // The test passed.
  case pass
  //
  // The test failed.
  case fail
  //
  // The test was skipped.
  case skip
}

Code example

The following code represents a simple usage of the API:

import DatadogSDKTesting
let session = DDTestSession.start(bundleName: "ManualSession")
let suite1 = session.suiteStart(name: "ManualSuite 1")
let test1 = suite1.testStart(name: "Test 1")
test1.setTag(key: "key", value: "value")
test1.end(status: .pass)
let test2 = suite1.testStart(name: "Test 2")
test2.setErrorInfo(type: "Error Type", message: "Error message", callstack: "Optional callstack")
test2.end(status: .fail)
suite1.end()
let suite2 = session.suiteStart(name: "ManualSuite 2")
..
..
session.end()

Always call session.end() at the end so that all the test info is flushed to Datadog.

Contributing

Pull requests are welcome. First, open an issue to discuss what you would like to change. For more information, read the Contributing Guide.

License

Apache License, v2.0

dd-sdk-swift-testing's People

Contributors

albertvaka, amonshiz, fermayo, juan-fernandez, nachobonafonte, ypopovych


dd-sdk-swift-testing's Issues

Flakiness detection not working properly with retry-tests-on-failure and test-iterations xcodebuild parameters

Flakiness detection not working as expected

We are using a feature introduced in Xcode 13, automatic test retries. This can be invoked by passing the following parameters to xcodebuild:

-retry-tests-on-failure -test-iterations X

where X is the number of times to run a test in case it fails. (Screenshot of the resulting xcresult omitted.)

Currently, this test run does not trigger flaky test detection.
From the Datadog UI: (screenshots omitted)

I think this case should be handled as a flaky test, since within the same run the test ran multiple times, failed twice, and finally passed.
In the "How are flaky tests defined?" help section I see that:

Flaky tests are tests that exhibit both a passing and failing status across multiple test runs for the same commit. If you commit some code and run it through CI, and a test fails, and you run it through CI again and the test passes, that test is unreliable as proof of quality code.

Not sure, but it seems that the issue here is that it only takes into consideration different runs, and in this case we have only one run.

Is this expected behavior? It makes flakiness detection really bad for us. Is there anything we can do at the configuration level to make this work the way we want?

SPM unable to resolve 2.1.1

The issue

We recently bumped our dependency from the following resolved SPM definition

{
  "package": "dd-sdk-swift-testing",
  "repositoryURL": "https://github.com/DataDog/dd-sdk-swift-testing",
  "state": {
    "branch": null,
    "revision": "707ba6cebf51ec3bba0697b463450973f0530302",
    "version": "2.1.0"
  }
}

to the below

{
  "package": "dd-sdk-swift-testing",
  "repositoryURL": "https://github.com/DataDog/dd-sdk-swift-testing",
  "state": {
    "branch": null,
    "revision": "abb4da0970a96686b3c07476d4f4eaae16547c52",
    "version": "2.1.1"
  }
}

Now our developers are getting the below error when attempting to resolve packages within Xcode

Revision abb4da0970a96686b3c07476d4f4eaae16547c52 for dd-sdk-swift-testing remoteSourceControl https://github.com/DataDog/dd-sdk-swift-testing version 2.1.1 does not match previously recorded value 53f53728eda43f3e39a1066128131ed37d378640

Dependency Manager:

SPM

Xcode version:

Xcode 13.4.1
Xcode 14.0 beta 6

macOS version:

macOS Monterey 12.4 (21F79)

Can't archive app due to missing bitcode

The issue

I'm using this sdk to instrument UI tests, and followed the instructions to add the sdk to my target app. Running the tests works fine, but when I try to archive the app I get the below error:

ld: '<derived data path>/Build/Intermediates.noindex/ArchiveIntermediates/<app name>/BuildProductsPath/Release-iphoneos/DatadogSDKTesting.framework/DatadogSDKTesting' does not contain bitcode. You must rebuild it with bitcode enabled (Xcode setting ENABLE_BITCODE), obtain an updated library from the vendor, or disable bitcode for this target. file '<derived data path>/Build/Intermediates.noindex/ArchiveIntermediates/<app name>/BuildProductsPath/Release-iphoneos/DatadogSDKTesting.framework/DatadogSDKTesting' for architecture arm64

Is there some sort of workaround for this? I don't see a way to link the SDK only if I'm running tests. Are you able to publish future versions of the sdk with bitcode enabled?


Datadog SDK version:

1.0.0

Last working Datadog SDK version:

n/a

Dependency Manager:

SPM and Cocoapods. This sdk is installed using SPM.

Other toolset:

No

Xcode version:

Xcode 13.1 (13A1030d)

Swift version:

Whatever ships with Xcode 13.1

Deployment Target:

iOS 13, iPhone + iPad

macOS version:

macOS Big Sur 11.6 (20G165)

CircleCI iOS Setup

The thing

We have a monorepo project for our iOS and macOS app. Under the Tests tab, it's only showing the Test Service mac-pro-server while our ios-pos project never shows up.

The tests results are definitely getting reported properly because both Test Services are showing results under the Test Runs tab. However, the ios-pos Test Service doesn't show the git branch name.

I have them setup with the below envVars using Xcodegen:

iOS Environment
"DD_TEST_RUNNER": $(DD_TEST_RUNNER)
"DATADOG_CLIENT_TOKEN": $(DATADOG_CLIENT_TOKEN)
"DD_SERVICE": "ios-pos"
"DD_ENV": "$(DD_ENV)"
"SRCROOT": "$(SRCROOT)"
macOS Environment
"DD_TEST_RUNNER": $(DD_TEST_RUNNER)
"DATADOG_CLIENT_TOKEN": $(DATADOG_CLIENT_TOKEN)
"DD_SERVICE": "mac-pro-server"
"DD_ENV": "$(DD_ENV)"
"SRCROOT": "$(SRCROOT)"

They are both using the same Client Tokens and same Environment variables except for the DD_SERVICE var.
Is there something wrong with my setup? Any help on fixing this is greatly appreciated!

[Question] Manual launch support

Support loading the tool manually

Loading the tool automatically is really nice and works for most use cases. It also helps keep things simple. We have a use case, though, in which manual config would be really helpful: running tests in Amazon Device Farm. When running UI Tests in ADF, we do not trigger the run from Xcode. Instead, we upload both IPA files for the runner and for the target app to a bucket, and ADF orchestrates the test run. It'd be great for such a use case if we could manually load the tool with a custom config directly from the code.

What are your thoughts?

Manual reporting of test results no longer working

The manual reporting of test results (see #47) has stopped working on our system.

I updated to the latest version and switched to using DD_API_KEY, and updated how we trigger it to something like this:

DD_API_KEY={apiKey} DD_TEST_RUNNER=1 DD_SERVICE={ourService} SRCROOT=. DD_DISABLE_CRASH_HANDLER=1 DD_ENV=ci swift run {executableTarget} {ourParameters}

The results were previously reported nicely on https://app.datadoghq.com/ci/test-services, but they don't show up anymore.


I tried setting DD_DEBUG_TRACE=1 and got this output from Datadog at the start of the run:

[DatadogSDKTesting]Library loaded and active. Instrumenting tests.
[Debug][DatadogSDKTesting]Loaded images: {long list}

And this at the end:

[Debug][DatadogSDKTesting]DDCFMessageID.forceFlush finished

But nothing in between.

Any help or advice would be much appreciated!

Working locally, failing in the CI (Jenkins)

Fails in Jenkins

Things are working for me locally with version 0.9.4 (integrated using Cocoapods and configured via the scheme at the moment), but in Jenkins the tests are crashing 😕
We have three test targets and got the following errors:

Testing failed:
	XXXXXTests:
		App (17710) encountered an error (Early unexpected exit, operation never finished bootstrapping - no restart will be attempted. (Underlying Error: Test crashed with signal ill before starting test execution. If you believe this error represents a bug, please attach the result bundle at …/xx.xcresult))
	YYYYYTests:
		xctest (17753) encountered an error (Early unexpected exit, operation never finished bootstrapping - no restart will be attempted. (Underlying Error: Test crashed with signal ill before starting test execution. If you believe this error represents a bug, please attach the result bundle at …/xx.xcresult))
	ZZZZZTests:
		xctest (17784) encountered an error (Early unexpected exit, operation never finished bootstrapping - no restart will be attempted. (Underlying Error: Test crashed with signal ill before starting test execution. If you believe this error represents a bug, please attach the result bundle at …/xx.xcresult))

I know this is very little information. I downloaded the xcresult but it just shows the same information. Is there a way that we can enable more verbose logging from the library so that we can troubleshoot better?

Don't embed the testing SDK dependencies in XCFramework

The thing

Current setup with xcframework

opentelemetry-swift <- dd-sdk-swift-testing <- my app
opentelemetry-swift <- dd-sdk-ios           <- my app

This causes a duplicate symbol problem; ideally, opentelemetry-swift should not be embedded in the framework but should instead be added as a transitive dependency.

Not getting any results using SPM

I've integrated the package using SPM. The SDK I'm trying to integrate is tested on Bitrise CI and only uses SPM. Therefore, there's no way to link the package through the Xcode interface.

The only way I could make it work is by explicitly adding an import to one of my test files: turned out that this didn't fix it on CI. I'm investigating further.

import DatadogSDKTesting

Is there another way to force link the library for SPM Package testing?

Issues with network instrumentation after bumping to 2.2.0

UI Test execution increase after bumping to 2.2.0

TL;DR
Starting in version 2.2.0, network instrumentation adds a 2-3 minute delay to each test execution.

We were using an older version of the SDK in our apps. We just migrated to the latest version and noticed that UI tests went from ~15 seconds each to ~190 seconds.
We have been testing different versions of the lib until we pinned it down to 2.2.0.

The behavior that we are observing is that the test takes a very long time to start. Once it finally starts, it has a normal execution time.

We have only noted this for UI tests, not unit tests. After researching, I think this is related to the fact that for UI tests we make some real network requests to set up stubs in a mock server. We see this in the logs:

2023-06-12 10:48:05.175739+0200 XXXXX-Runner[6034:5463357] [Client] Updating selectors after delegate addition failed with: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.commcenter.coretelephony.xpc was invalidated: failed at lookup with error 3 - No such process." UserInfo={NSDebugDescription=The connection to service named com.apple.commcenter.coretelephony.xpc was invalidated: failed at lookup with error 3 - No such process.}
2023-06-12 10:49:58.153925+0200 XXXXX-Runner[6034:5464533] [connection] nw_connection_add_timestamp_locked_on_nw_queue [C3] Hit maximum timestamp count, will start dropping events

There are multiple occurrences of the error com.apple.commcenter.coretelephony.xpc was invalidated, and then after Hit maximum timestamp count, will start dropping events the test starts running as expected.

Setting DD_DISABLE_NETWORK_INSTRUMENTATION to true fixes the issue, but I wonder what has changed in the library that makes this a problem all of a sudden.

This is not a problem for us since we don't rely on network instrumentation for the tests, but you might want to take a look at the issue.

Disable NTPClock

While investigating crashes within the SDK we noticed that it appears to be entirely due to something within the Network library. That library is used by the NTPClock via the NTPServer type. It would be great if we could set an environment variable to disable usage of NTPClock and instead use the default Date type.

An example of a crash we experience:

Crashing thread stack:

0    ArcCore                        0x12f810c50        allocator_shim::TryFreeDefaultFallbackToFindZoneAndFree(void*) (in ArcCore) (allocator_shim.cc:160)
1    libxpc.dylib                   0x1861f2da8        _xpc_dictionary_dispose
2    libxpc.dylib                   0x1861f2814        -[OS_xpc_object dealloc]
3    libsystem_trace.dylib          0x186241010        _os_activity_stream_reflect
4    libsystem_trace.dylib          0x186247920        _os_log_impl_stream 
5    libsystem_trace.dylib          0x186238ca0        _os_log_impl_flatten_and_send
6    Network                        0x18d36e0c4        networkd_settings_read_from_file()
7    Network                        0x18d36d1c8        networkd_settings_init
8    Network                        0x18cc13364        -[NWConcrete_nw_parameters initWithStack:]
9    Network                        0x18d15977c        nw_path_create_evaluator_for_endpoint_no_evaluate
10   Network                        0x18d1592f0        nw_path_create_evaluator_for_endpoint
11   Network                        0x18cfc8224        nw_nat64_v4_address_requires_synthesis
12   libsystem_info.dylib           0x18650323c        _gai_nat64_second_pass
13   libsystem_info.dylib           0x1864efca4        si_addrinfo         
14   libsystem_info.dylib           0x1864ef738        getaddrinfo         
15   DatadogSDKTesting              0x11789a8c0        -[NTPServer connectWithError:]
16   DatadogSDKTesting              0x11789ace4        -[NTPServer syncWithError:]
17   DatadogSDKTesting              0x117264360        <unknown> (JIT?)    
18   DatadogSDKTesting              0x11726f394        <unknown> (JIT?)    
19   libdispatch.dylib              0x186308400        _dispatch_client_callout
20   libdispatch.dylib              0x186309c40        _dispatch_once_callout
21   DatadogSDKTesting              0x11726a180        <unknown> (JIT?)    
22   libdispatch.dylib              0x186306874        _dispatch_call_block_and_release
23   libdispatch.dylib              0x186308400        _dispatch_client_callout
24   libdispatch.dylib              0x18630b4e0        _dispatch_queue_override_invoke
25   libdispatch.dylib              0x186319e98        _dispatch_root_queue_drain
26   libdispatch.dylib              0x18631a6c0        _dispatch_worker_thread2
27   libsystem_pthread.dylib        0x1864b4038        _pthread_wqthread   
28   libsystem_pthread.dylib        0x1864b2d94        start_wqthread      

This may be due to a poor interaction between the SDK and the Chromium PartitionAlloc, but a direct solution for us is the ability to disable NTPClock since we do not have control over the code that appears to crash in Network framework.

[Question] Cocoapods support

Cocoapods support

Is distributing this framework as a pod on the roadmap? Although SPM is great, there are still some limitations that can prevent a team from adopting it (e.g. the impossibility of specifying a package for just one build configuration).
I know you also provide a direct download for the xcframework (I assume it's built with ABI stability), but I would like to use this with a dependency management tool.

[Question] Annotate tests with ownership

Annotating tests with ownership

In our projects we use the Github codeowners functionality to decide which team is responsible for any given file. I wonder if there is a way to integrate this data into Datadog test reports. We have noticed that there is a Test Code Owner facet, but we do not know where it is coming from.


This would enable us to attribute issues with tests to teams, adding a lot of value to the tool. Is there any way of achieving this with the current solution? I have seen some actions around this, like #35 , but I am not sure how to configure it. (We are using version 2.1.2)

Thanks in advance!

Datadog SDK was compiled without bitcode and it's preventing us from generating a release build

The linker complains that the library does not support bitcode, which we have enabled in our app.

This is the error we get when trying to create a release build.

ld: '/Users/*****/Library/Developer/Xcode/DerivedData/Build/Products/Debug-iphoneos/DatadogSDKTesting.framework/DatadogSDKTesting' does not contain bitcode. You must rebuild it with bitcode enabled (Xcode setting ENABLE_BITCODE), obtain an updated library from the vendor, or disable bitcode for this target. file '/Users/****/Library/Developer/Xcode/DerivedData/Build/Products/Debug-iphoneos/DatadogSDKTesting.framework/DatadogSDKTesting' for architecture arm64

Bug detecting codeowners for tests

The library does not report ownership correctly when path pattern starts with /

We have noticed that in our project, when the path in the codeowners file begins with /, code ownership for that test is not assigned to the rightful owner.

Github codeowners supports path for the file to begin with /
As an example from Github's docs:

In this example, @octocat owns any file in an apps directory
anywhere in your repository.
apps/ @octocat

In this example, @doctocat owns any file in the /docs
directory in the root of your repository and any of its
subdirectories.
/docs/ @doctocat

https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners

For example, if we had the following codeowners file:

* @umbrella-team for any unowned stuff
/UITests/Suite1.swift @team1
UITests/Suite2.swift @team2

The tests in Suite2 will be correctly assigned to team2, but the tests in Suite1 will be assigned to umbrella-team

We are using version 2.1.2 of the library.

Manually report test progress and result

We have an internal testing tool written in Swift which is not based on XCTest. I really like how Datadog is visualising test results and cross-references tests to server calls being made, and I'm curious if our internal testing tool can be integrated with that.

Looking at DDTestObserver, it should be possible to allow manually reporting when a test started and what the result is when it finished. Currently we'd have to reverse engineer how that observer integrates with XCTest and how it extracts the suite name and test name, and then either report like that to XCTestObservationCenter or allow our testing tool to run using XCTestCase (which is frankly quite painful, as the cases would have to be generated programmatically, which isn't well documented).

It'd be great if there was a documented/supported way in DatadogSDKTesting to support this by providing a public interface that allows reporting when a test started and with what result it finished.
