

specs's Issues

Clarify RecordingJobStateChange data

#2701

Current Problem
The DTT tests that the payload does not contain Sources information, which makes sense to limit the size of the event.
The specification, however, states that this information shall be provided.
Proposal
Add to 5.25.1
The device shall omit the Sources parameter when emitting the event.

Change History (1)

Changed 4 months ago by hans.busch
Correction, our sample forgot to repeat the State information in the ElementItem. The DTT was just creating wrong output.
My conclusion is that the whole element item should be made optional since the only mandatory items state and token repeat the simple item parameters.
Note that this CR would probably require a relaxation of the DTT.

Send Notification from onvif server to client issue.

I'm trying to implement the WS-BaseNotification send method. The server handles Subscribe, Unsubscribe and Renew requests and returns responses to the client successfully. But when an event is triggered, the server should initiate a Notify message to the client; how do I implement that? I tried creating a new SOAP socket and sending through soap_serve___ns10__Notify(soap_notify) with the newly created socket, but it is not received at the client.
Is this how a notification is sent, or is there anything else that needs to be handled on the server side? Is there any reference for sending notifications? If so, please help.

Align rule source parameters

specs/doc/Analytics.xml

Lines 3132 to 3136 in 2f627a6

<tt:SimpleItemDescription Name="VideoSourceConfigurationToken"
Type="tt:ReferenceToken"/>
<tt:SimpleItemDescription Name="VideoAnalyticsConfigurationToken"
Type="tt:ReferenceToken"/>
<tt:SimpleItemDescription Name="Rule" Type="xs:string"/>

  1. Align name of overly long source and make optional

VideoSourceConfigurationToken => VideoSource

Mark the parameter as optional since typically there is a one-to-one mapping from analytics configuration to video source.

  2. Remove obsolete VideoAnalyticsConfigurationToken

The property is part of the analytics configuration. There is no need to report the analytics configuration since it must always match the configuration passed to Get/Modify.

How to Decode Password Digest

From the ONVIF documents, the formula to get the Password Digest is as follows:
Digest = B64ENCODE( SHA1( B64DECODE( Nonce ) + Date + Password ) )

I have a reference camera with the following details:
Digest : JRxYtIDJPbbd2cNy7DSUBc9jfm4=
Nonce : XOzsWFDjHUCy2Kftff1WljwAAAAAAA==
Date : 2021-10-08T06:30:37.019Z
Password : admin123

I'm not able to reproduce the Digest from the above details using online b64encode/b64decode tools.

Are there detailed steps for generating this Digest value when the Nonce, Date and Password are known?
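The formula above can be checked mechanically. A short sketch, standard library only, that concatenates the decoded nonce bytes, the Created timestamp string and the password before hashing:

```python
# Sketch: Digest = B64ENCODE( SHA1( B64DECODE( Nonce ) + Date + Password ) )
import base64
import hashlib

def password_digest(nonce_b64: str, created: str, password: str) -> str:
    """Compute the WS-UsernameToken PasswordDigest."""
    raw = (base64.b64decode(nonce_b64)
           + created.encode("utf-8")
           + password.encode("utf-8"))
    return base64.b64encode(hashlib.sha1(raw).digest()).decode("ascii")

digest = password_digest("XOzsWFDjHUCy2Kftff1WljwAAAAAAA==",
                         "2021-10-08T06:30:37.019Z",
                         "admin123")
print(digest)
```

If the result does not match the camera's digest, the usual suspects are hashing the base64 text of the nonce instead of its decoded bytes (online tools typically get this wrong), or a Created string that differs byte-for-byte from the one sent in the Security header.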

AppMgmt Service State Event

At present, the AppMgmt service defines the event below:

Topic:
tns1:AppMgmt/State

<tt:MessageDescription IsProperty="true">
 <tt:Source>
 <tt:SimpleItemDescription Name="AppID" Type="xs:string"/>
 </tt:Source>
 <tt:Data>
 <tt:SimpleItemDescription Name="State" Type="tam:State"/>
 <tt:SimpleItemDescription Name="Status" Type="xs:string"/>
 </tt:Data>
</tt:MessageDescription>

but I cannot find where "tam:State" is defined.
It looks like it should refer to the type below:
"ans:AppState"

Device with two cameras

Hello,

I have a Hikvision DS-2TD8166-150ZH2F/V2; this device has two "cameras" inside, optical and thermal. I can move the camera with PTZ without problems, but I can't use zoom on the thermal camera, only on the optical one.

I tried to select the thermal camera with MediaProfile token 201, but it is not working:

media_profile = "Profile_201"
status = ptz.GetStatus({'ProfileToken': media_profile})
# Note: GetConfigurations()[0] picks the first PTZ configuration, which may
# belong to the optical channel rather than the thermal one.
configuration = ptz.GetConfiguration(
    {'PTZConfigurationToken': ptz.GetConfigurations()[0].token})

request = ptz.create_type('AbsoluteMove')
# media_profile is already the token string; it has no .token attribute.
request.ProfileToken = media_profile
request.Position = status.Position
request.Position.PanTilt.x = posx
request.Position.PanTilt.y = posy
request.Position.Zoom.x = posz
request.Speed = configuration.DefaultPTZSpeed
request.Speed.PanTilt.x = 0.1
request.Speed.PanTilt.y = 0.1
ptz.AbsoluteMove(request)

Please help! I don't know what to do!

Simplify PolylineArrayConfiguration

specs/doc/Analytics.xml

Lines 3427 to 3428 in 2f627a6

<tt:ElementItemDescription Name="LineSegments"
Type="tt:PolylineArrayConfiguration"/>

The counter definition includes the complex type PolylineArrayConfiguration, which would need option definitions. I suggest instead allowing multiple occurrences of Polyline, like:
<tt:ElementItem Name="Segment">tt:Polyline <tt:Point .... > ... </tt:ElementItem>
<tt:ElementItem Name="Segment">tt:Polyline <tt:Point .... > ... </tt:ElementItem>

Introduce a maxOccurs attribute to signal the upper limit. Default is one.

Remove requirement: The sum of the likelihoods shall not exceed 1.

Some deep learning detectors can use a sigmoid output instead of softmax and then have overlapping class predictions with likelihoods not always summing to 1. I think the following requirement should be removed to allow more flexibility in the classification output.

<para>A Class Descriptor is defined as an optional element of the appearance node of an object node. The class descriptor is defined by a list of object classes together with a likelihood that the corresponding object belongs to this class. The sum of the likelihoods shall not exceed 1.</para>
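A numeric illustration of the argument (plain Python, not ONVIF API): a softmax head produces likelihoods that sum to exactly 1, while independent sigmoid outputs over the same logits can sum well above 1, which is why the "shall not exceed 1" requirement rules out that style of detector.

```python
# Sketch: softmax vs. sigmoid likelihoods for the same class logits.
import math

logits = {"Human": 2.0, "Vehicle": 1.5, "Animal": -1.0}

# Softmax: mutually exclusive classes; likelihoods sum to 1.
exps = {k: math.exp(v) for k, v in logits.items()}
total = sum(exps.values())
softmax = {k: v / total for k, v in exps.items()}

# Sigmoid: independent per-class scores; the sum may exceed 1.
sigmoid = {k: 1.0 / (1.0 + math.exp(-v)) for k, v in logits.items()}

print(round(sum(softmax.values()), 6))  # 1.0
print(sum(sigmoid.values()) > 1.0)      # True
```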

Align metadata mime types

ONVIF originally defined the mime type application/x-onvif-metadata and later added mime types for

  • vnd.onvif.metadata.gzip for GZIP compressed
  • vnd.onvif.metadata.exi.onvif for EXI using ONVIF default compression parameters
  • vnd.onvif.metadata.exi.ext for EXI using compression parameters that are sent in-band

The question is whether this is consistent with RFC 6713, which would expect suffixes like +xml or +gzip.

An additional point is to clarify the mime types used for MP4 and maybe GStreamer.

Counting rule interpretation for multiple lines

While trying to make the counting rule more consistent I found a discrepancy regarding how to interpret a rule with multiple polylines. The current spec says that an object has to pass through all of the line segments to register as a count. That would eliminate the use-case of having one rule that defines many exits with multiple lines that would count as single points of entry and exit. So basically for that kind of use-case you would need many rules and then it would be aggregated outside of ONVIF in some other way.

See here for the proposal PR. The problem is unresolved. I think there are these alternatives:

  1. Keep current definition.
  2. Change definition to mean any line.
  3. Make it possible to define the behaviour with a parameter.
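The difference between alternatives 1 and 2 can be sketched as follows; the helper and mode names are hypothetical, and alternative 3 would simply expose the mode as a rule parameter:

```python
# Sketch: given the set of line indices an object crossed, does the rule
# register a count? "all" is the current spec wording; "any" is alternative 2.
def registers_count(crossed: set, num_lines: int, mode: str) -> bool:
    if mode == "all":   # object must pass through every configured line
        return crossed == set(range(num_lines))
    if mode == "any":   # crossing any single line counts
        return len(crossed) > 0
    raise ValueError(mode)

# An object that crossed only line 0 of a two-line rule:
print(registers_count({0}, 2, "all"))  # False
print(registers_count({0}, 2, "any"))  # True
```

Under "all", the multi-exit use-case needs one rule per line plus external aggregation; under "any", one rule covers all exits.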

Clarify hierarchical transformation

Both Frame and Object define an optional Transformation.

Section 5.1.3.1 defines:
The appearance node starts with an optional transformation node which can be used to change from a frame-centric coordinate system to an object-centric coordinate system.

The sentence does not say how the resulting transform has to be applied. Options are:

  1. The object transform replaces the frame transform.
  2. Both transforms have to be applied, starting with the frame transform.
  3. Both transforms have to be applied, starting with the object transform.
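The ambiguity is easy to see numerically. A sketch with simple scale-plus-translate transforms (values are illustrative; the ONVIF Transformation carries Scale and Translate elements) shows that the three readings give three different points:

```python
# Sketch: apply a (scale, translate) pair to a point: p' = s * p + t.
def apply(t, p):
    (sx, sy), (tx, ty) = t
    return (sx * p[0] + tx, sy * p[1] + ty)

frame_t  = ((0.5, 0.5), (10.0, 10.0))   # frame-level Transformation
object_t = ((2.0, 2.0), (-1.0, -1.0))   # object-level Transformation
p = (4.0, 4.0)

opt1 = apply(object_t, p)                  # 1: object replaces frame
opt2 = apply(object_t, apply(frame_t, p))  # 2: frame first, then object
opt3 = apply(frame_t, apply(object_t, p))  # 3: object first, then frame
print(opt1)  # (7.0, 7.0)
print(opt2)  # (23.0, 23.0)
print(opt3)  # (13.5, 13.5)
```

Since the three options disagree whenever both transforms are non-trivial, the spec needs to pick one explicitly.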

IntList and FloatList versus IntAttrList and FloatAttrList

The element items together with the string definitions were consolidated back in 2019. Now Mir pointed, with ext-2170, to the Analytics specification where the Option Type refers to the element definition and not the type definition. The definition itself is correct but seems to cause confusion among readers.

Aligning the type name with the element name would have no interoperability issues. Since the names are quite unique, a global replace should be applicable.

Question to the Video working group: should the items be aligned?

Clarity on the privacy mask coordinates

In the Media2 spec under "5.2.2 Video source configuration":

All coordinate systems (e.g. Privacy Masks in the Media2 Service and Motion Regions in the Analytics service)
that apply to a video source configuration are based on the resulting image after applying the Bounds and then
Rotate to the source image.

but in section "5.10 Privacy Masks", normalized coordinates are defined.

Wrong ItemList order in Analytics Annex D1.1 and D1.2

In the Analytics spec Annex D1.1 "Spot Measurement Module" and D1.2 "Box Measurement Module" there is an incorrect example: the order inside the Data element is wrong.
1st shall be SimpleItemDescription
2nd shall be ElementItemDescription

This sequence is required by the ONVIF schema; see <xs:complexType name="ItemListDescription"> in onvif.xsd for details.
Besides, the namespace should be tt: for Data and for ElementItemDescription, not xs:.

Debug Log Informations

Hi,
I'm getting these debug logs:
SOAP 1.2 fault SOAP-ENV:Sender [no subcode]
"End of file or no input: message transfer interrupted"
Detail: [no detail]

These logs are caused by soap->error (EOM). But I have added all the required parameters to the response and I am still getting these error messages. How can we avoid and remove these debug logs?

Geolocation event correction

#2726

Description (last modified by sujith.raman)
In the 8/20 VEWG telco, while analyzing ticket #2719 regarding a request to add a geolocation event example under MQTT,
it was found that in the core spec, under section 8.8.11 Geo Location,
an ElementItemDescription is used under Source for what is basically a token.

Topic: tns1:Monitoring/GeoLocation
Event description:

<tt:MessageDescription IsProperty="true">
  <tt:Source>
    <tt:ElementItemDescription Name="DeviceEntity" Type="tt:DeviceEntity"/>
  </tt:Source>
  <tt:Data>
    <tt:ElementItemDescription Name="CurrentLocation" Type="tt:GeoLocation"/>
  </tt:Data>
</tt:MessageDescription>

Proposal:

Topic: tns1:Monitoring/GeoLocation
Event description:
<tt:MessageDescription IsProperty="true">
  <tt:Source>
    <tt:SimpleItemDescription Name="Token" Type="tt:ReferenceToken"/>
  </tt:Source>
  <tt:Data>
    <tt:ElementItemDescription Name="CurrentLocation" Type="tt:GeoLocation"/>
  </tt:Data>
</tt:MessageDescription>

Profile T Support - Imaging Service Getting Failed

Hi,
I have to add Profile T support for an IP camera. In the ONVIF Conformance Test Tool, the Imaging Service\Tampering Events* feature support and Imaging Service\Motion Alarm feature support steps are not passing.

Referred document: https://www.onvif.org/wp-content/uploads/2021/06/ONVIF_Profiles_Conformance_Device_Test_Specification_21.06.pdf?ccc393&ccc393

As per this document [page 102, section 7], for the Imaging service it says to add functions for the GetServiceCapabilities, GetImagingSettings, GetOptions and SetImagingSettings APIs. I have already added that code, yet the steps still fail.

For the GetEventProperties API, I have added RuleEngine (CellMotionDetector, TamperDetector) and VideoSource (MotionAlarm, GlobalSceneChange) responses to the TopicSet, but the DUT did not return the specified topics.

How can I add the Imaging service features for Profile T support to the device? Is there a document to refer to?

Kindly help... Thanks in advance.

Content Filter Support

#2699

Description (last modified by hans.busch)
Problem description
When analyzing the implementation impact of Profile M we came across the below issue.
The specification mandates XPath based content filtering. There is no constraint for base notification, real time pullpoint and metadata event streaming.
With implementation verification we ran into the question where and to which extent content filtering needs to be implemented. The test tool tests a simplistic 2 parameter example.
For subscribe methods the XPath based expressions are just input and hence the complexity of the expression doesn't matter too much.
However, for metadata streaming the value is both input and output of the metadata configuration and must be stored both compiled and as plaintext for output. Large expressions may end up as many kilobytes of text, which may not be acceptable for a feature that is hardly used at all.
Options to move forward
a) Make content filtering optional for metadata streaming
b) Add a capability describing the maximum supported expression size. Include a lower limit to enable simple expressions as tested by the test tool.
Proposal
Add attribute to GetMetadataConfigurationOptions
<xs:attribute name="MaxContentFilterSize" type="xs:int"/>
MaxContentFilterSize: A device signalling support for content filtering shall support expressions up to the provided expression size.
Replace in section 5.2.8 Metadata Configuration of Media2
Event streaming can be enabled and controlled using topic filters. For topic filter configuration refer to section “Event Handling” of the ONVIF Core Specification.
by
Event streaming can be enabled and controlled using topic filters. A device signalling MaxContentFilterSize shall support content filtering. For topic and content filter configuration refer to section “Event Handling” of the ONVIF Core Specification.
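Device-side, the proposed capability would reduce to a simple length check before accepting and compiling the expression. A sketch with hypothetical names; the MaxContentFilterSize value is chosen arbitrarily:

```python
# Sketch: a device advertising MaxContentFilterSize accepts content filter
# expressions up to that many characters and rejects longer ones.
MAX_CONTENT_FILTER_SIZE = 128  # value the device would report in its options

def accept_content_filter(expr: str) -> bool:
    """Accept the filter only if it fits within the advertised limit."""
    return len(expr) <= MAX_CONTENT_FILTER_SIZE

print(accept_content_filter('boolean(//tt:SimpleItem[@Name="IsMotion"])'))  # True
print(accept_content_filter("x" * 1000))                                    # False
```

A lower limit on the capability, as option (b) suggests, would guarantee that simple expressions like the one tested by the test tool always fit.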
Change History (4)

Changed 4 months ago by hans.busch
Description modified (diff)

Changed 2 months ago by fredrik.svensson
I'm fine with option A since this feature is very rarely used. Option B is also OK.

Changed 5 weeks ago by hans.busch
Description modified (diff)
Update proposal according to conclusion from telco to clearly state that content filtering on metadata streams is optional.

Changed 8 days ago by sujith.raman
Milestone changed from VEnhNx.5 to ChangeRequest
Change accepted and converting this ticket to changerequest based on 8/20 telco.

VideoEncoderDetails not listing sub stream

Hi,
I have an IP camera and have started adding Profile S support for it. In the NVR client, the device is not listing video encoder settings for the substream.
In the GetProfiles API, details for all the profiles are already added, but every time GetStreamUri, GetVideoEncoderConfigurationOptions and GetVideoEncoderConfigurations are called with the first profile token only. Is there any other API to which profile details should be added apart from GetProfiles?

I've attached the response given by the device for GetProfiles and GetCapabilities.
GetCapabilities.txt
GetProfiles.txt

Please help me on this. Thanks in advance.

CreateRules missing requirement about rule creation with empty parameters

For analytics modules we have a requirement to support creating a module with an empty Parameter definition.
But the Create Rules section, 5.3.3.3 CreateRules, does not have a similar requirement.
(Ref: https://wush.net/trac/onvif-ext/ticket/2080#comment:2)

Proposal: add the below requirement to 5.3.3.3 CreateRules:
The device shall accept adding of analytics rules with an empty Parameter definition. Note that the resulting configuration may include a set of default parameter values.

How to Export Recorded Data from an ONVIF Camera to my computer

I've tried getting an RTSP link for a past time range, but maybe ONVIF doesn't support that. So now I want to export the camera recording via ONVIF, but the ExportRecordedData API probably doesn't support doing that either.
Please let me know if there is a way to do that. Sincere thanks.

Extensibility of event definitions

ONVIF events are not defined in a schema like other ONVIF structures, which raises the question whether you are allowed to extend ONVIF-defined events with your own fields. For example, can I add ClassTypes to a LineDetector event like this:

<wsnt:NotificationMessage>
 <wsnt:Topic Dialect="http://www.onvif.org/ver10/tev/topicExpression/ConcreteSet">
 tns1:RuleEngine/LineDetector/Crossed
 </wsnt:Topic>
 <wsnt:Message>
 <tt:Message UtcTime="2008-10-10T12:24:57.789Z">
 <tt:Source>
 <tt:SimpleItem Name="VideoSource" Value="1"/>
 <tt:SimpleItem Name="AnalyticsConfiguration" Value="1"/>
 <tt:SimpleItem Name="Rule" Value="MyLine" />
 </tt:Source>
 <tt:Data>
 <tt:SimpleItem Name="ObjectId" Value="55"/>
 <tt:SimpleItem Name="ClassTypes" Value="person"/>
 </tt:Data>
 </tt:Message>
 </wsnt:Message>
 </wsnt:NotificationMessage>

I think it should be clarified in the Event spec what is allowed and not allowed to add to ONVIF defined events.

Missing enumerators in AudioClassType?

In this type, should there be enumerators rather than just mentioning the values in the comments/docs?

<xs:simpleType name="AudioClassType">

Like this:

  <xs:simpleType name="AudioClassType">
    <xs:annotation>
      <xs:documentation>
        AudioClassType acceptable values are;
        gun_shot, scream, glass_breaking, tire_screech   
      </xs:documentation>
    </xs:annotation>
    <xs:restriction base="xs:string"/>
  </xs:simpleType>

Becomes this:

<xs:simpleType name="AudioClassType">
  <xs:annotation>
    <xs:documentation>
      AudioClassType acceptable values are;
      gun_shot, scream, glass_breaking, tire_screech   
    </xs:documentation>
  </xs:annotation>
  <xs:restriction base="xs:string">
    <xs:enumeration value="gun_shot" />
    <xs:enumeration value="scream" />
    <xs:enumeration value="glass_breaking" />
    <xs:enumeration value="tire_screech" />
  </xs:restriction>
</xs:simpleType>

Add clarification to PlateType field and fix documentation

  1. Fix documentation issue with PlateType in the technical specification

Existing: Description of the vehicle license plate type, e.g., "Normal", "Police", "Trainning".

  • "Trainning" is misspelt, and such a value is not in the tt:PlateType enum

Proposed: Description of the vehicle license plate type, e.g., "Normal", "Police", "Diplomat".

  2. Add reference to tt:PlateType in the PlateType schema documentation

Why are some services missing the wsdl:service definition?

Hi,

I am trying to generate Java code to implement all the WSDLs, but only the interfaces were generated, not the service/implementation. After some digging, it is because some of the WSDLs no longer define a service. I needed to manually add:

	<wsdl:service name="MediaService">
		<wsdl:port name="MediaPort" binding="trt:MediaBinding">
			<soap:address location="http://www.onvif.org/ver10/media/wsdl"/>
		</wsdl:port>
	</wsdl:service>

Any specific reason they are no longer present? Or were they never present and did the library add them?

onvif camera's username

Hi, I need an ONVIF camera's username, password and IP address to develop an Android app. How can I get this information? I don't have an ONVIF IP camera.

GetSupportedMetadata in WSDL: incorrect description for Type

Ref: https://wush.net/trac/onvif-ext/ticket/2024
Ref: https://wush.net/trac/onvif-ext/ticket/1962

The WSDL description for 'Type' in GetSupportedMetadata should refer to GetSupportedAnalyticsModules and NOT GetAnalyticsModules:

<xs:element name="GetSupportedMetadata">
  <xs:complexType>
    <xs:sequence>
      <xs:element name="Type" type="xs:QName" minOccurs="0">
        <xs:annotation>
          <xs:documentation>Optional reference to an AnalyticsModule Type returned from GetAnalyticsModules.</xs:documentation>
        </xs:annotation>
      </xs:element>
    </xs:sequence>
  </xs:complexType>
</xs:element>

The proposed change shall be submitted as a pull request.

Multiple VehicleInfo classifications

#2716

Description
Using Object/Class/Type you can specify several types since the element is unbounded, e.g. "Car" with Likelihood=0.4 and "Truck" with Likelihood=0.45. But with VehicleInfo only one type can be listed as maxOccurs=1. I propose to set maxOccurs=unbounded.
Proposal
Change
<xs:complexType name="Appearance">
<xs:sequence>
<xs:element name="Transformation" type="tt:Transformation" minOccurs="0"/>
<xs:element name="Shape" type="tt:ShapeDescriptor" minOccurs="0"/>
<xs:element name="Color" type="tt:ColorDescriptor" minOccurs="0"/>
<xs:element name="Class" type="tt:ClassDescriptor" minOccurs="0"/>
<xs:element name="Extension" type="tt:AppearanceExtension" minOccurs="0"/>
<xs:element name="GeoLocation" type="tt:GeoLocation" minOccurs="0"/>
<xs:element name="VehicleInfo" type="tt:VehicleInfo" minOccurs="0"/>
<xs:element name="LicensePlateInfo" type="tt:LicensePlateInfo" minOccurs="0"/>
<xs:element name="HumanFace" type="fc:HumanFace" minOccurs="0"/>
<xs:element name="HumanBody" type="bd:HumanBody" minOccurs="0"/>
<xs:element name="ImageRef" type="xs:anyURI" minOccurs="0"/>
<xs:element name="Image" type="xs:base64Binary" minOccurs="0"/>
<xs:any namespace="##any" processContents="lax" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
<xs:anyAttribute processContents="lax"/>
</xs:complexType>
To
<xs:complexType name="Appearance">
<xs:sequence>
<xs:element name="Transformation" type="tt:Transformation" minOccurs="0"/>
<xs:element name="Shape" type="tt:ShapeDescriptor" minOccurs="0"/>
<xs:element name="Color" type="tt:ColorDescriptor" minOccurs="0"/>
<xs:element name="Class" type="tt:ClassDescriptor" minOccurs="0"/>
<xs:element name="Extension" type="tt:AppearanceExtension" minOccurs="0"/>
<xs:element name="GeoLocation" type="tt:GeoLocation" minOccurs="0"/>
<xs:element name="VehicleInfo" type="tt:VehicleInfo" minOccurs="0" maxOccurs="unbounded"/>
<xs:element name="LicensePlateInfo" type="tt:LicensePlateInfo" minOccurs="0"/>
<xs:element name="HumanFace" type="fc:HumanFace" minOccurs="0"/>
<xs:element name="HumanBody" type="bd:HumanBody" minOccurs="0"/>
<xs:element name="ImageRef" type="xs:anyURI" minOccurs="0"/>
<xs:element name="Image" type="xs:base64Binary" minOccurs="0"/>
<xs:any namespace="##any" processContents="lax" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
<xs:anyAttribute processContents="lax"/>
</xs:complexType>

Swapped top and bottom coordinates

Top coordinates should have higher values than bottom ones, since the origin is in the lower left corner:

<tt:BoundingBox left="20.0" top="30.0" right="100.0" bottom="80.0"/>

<tt:BoundingBox left="20.0" top="30.0" right="100.0" bottom="80.0"/>

<tt:BoundingBox left="25.0" top="30.0" right="105.0" bottom="80.0"/>

<tt:BoundingBox left="20.0" top="30.0" right="100.0" bottom="80.0"/>

<tt:BoundingBox left="20.0" top="30.0" right="100.0" bottom="180.0"/>

<tt:BoundingBox left="40.0" top="100.0" right="70.0" bottom="150.0" />

<tt:BoundingBox left="20.0" top="30.0" right="100.0" bottom="80.0"/>

<tt:BoundingBox left="20.0" top="30.0" right="100.0" bottom="80.0"/>

<tt:BoundingBox left="20.0" top="30.0" right="100.0" bottom="80.0"/>

Syntax errors in wsdl

#2715

A syntax checker complained about the following message name errors:

GetDeviceIdReponse => GetDeviceIdResponse? 

UninstallAppRequest => UninstallRequest 

UninstallAppResponse => UninstallResponse 

Numbers two and three are uncritical since they have no effect on instance diagrams and generated code (applied to 20.06).

The first one is more problematic.

Generic recognition event fields - BoundingBox

#2722

At present under "Annex G. Recognition rule engines" in "Table G-2"

The BoundingBox parameter description is given as "Recognized object bounding box details".

It doesn't explain whether the bounding box is normalized.

Following CR #2691, I request updating the BoundingBox parameter description as below.

Proposal:

"Recognized object bounding box details. Assumes normalized screen limits (-1.0, 1.0)."
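For illustration, converting a pixel-space box to the normalized (-1.0, 1.0) screen limits that the proposal assumes could look like this; the helper name is hypothetical:

```python
# Sketch: map pixel coordinates into the normalized (-1.0, 1.0) range,
# where (-1, -1) and (1, 1) are opposite corners of the image.
def normalize_bbox(left, top, right, bottom, width, height):
    nx = lambda x: 2.0 * x / width - 1.0
    ny = lambda y: 2.0 * y / height - 1.0
    return (nx(left), ny(top), nx(right), ny(bottom))

# A centered box in a 1920x1080 image:
print(normalize_bbox(480, 270, 1440, 810, 1920, 1080))
# (-0.5, -0.5, 0.5, 0.5)
```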

Clarify coordinate systems

#2691 accepted defect

My expectation is that all rules use normalized coordinates for area and line definitions.

Note that for the Motion Region Detector differing coordinates have been defined.

I suggest that generic graphical elements like polygons and lines must use normalized coordinates.

Proposal

Add to section 5.2.3.2 Option type definition

Both tt:Polyline and tt:Polygon shall use normalized coordinates as defined in 5.1.2.2.

CertPathValidationPolicy missing for the MQTT client to authenticate the server

#2721

Description
In the proposed specs, EventBrokerConfig has a field CertificateID that is used to select the certificate that the MQTT client shall use to authenticate at the TLS level instead of using credentials.
When a device uses protocols such as MQTTS or WSS, it has to validate the certificate of the remote broker, which may not be from the same vendor as the device; in general, we cannot assume that the device can validate the certificate of the remote broker without a validation policy being defined.
The solution is to add to BrokerConfig the field CertPathValidationPolicyID of type tas:CertPathValidationPolicyID with the description: The ID of the certification path validation policy used to validate the broker certificate. In case encryption is used but no validation policy is specified, the device shall not validate the broker certificate.

Changed 8 days ago by sujith.raman
Milestone changed from VEnhNx.5 to ChangeRequest
Note: Same change would be required in uplink configuration also.
Converting this ticket to CR based on 8/20 Telco

Polygon configuration missing from 'tt:ObjectDetection' rule parameters

Ref: Annex A.6 in Analytics Service Specification

The object detection rule is described as 'The object detection rule generates events when any of the configured object types are detected in the field of view'.

But there is no 'region' or 'polygon' defined in the rule description, though there is 'DwellTime', which usually covers most of the 'violation' or 'loitering' use cases.

Hence this CR is added to propose adding 'Region' to the parameter list, similar to what we have for 'Recognition' rules like LicensePlate.

Proposal

<tt:RuleDescription Name="tt:ObjectDetection">
  <tt:Parameters>
    <tt:SimpleItemDescription Name="ClassFilter" Type="tt:StringList"/>
    <tt:SimpleItemDescription Name="ConfidenceLevel" Type="xs:float"/>
    <tt:SimpleItemDescription Name="DwellTime" Type="xs:duration"/>
    <tt:ElementItemDescription Name="Region" Type="tt:Polygon"/>  <!-- NEW ELEMENT -->
  </tt:Parameters>
  <tt:Messages>
    <tt:Source>
      <tt:SimpleItemDescription Type="tt:ReferenceToken" Name="VideoSource"/>
      <tt:SimpleItemDescription Type="xs:string" Name="RuleName"/>
    </tt:Source>
    <tt:Data>
      <tt:SimpleItemDescription Type="tt:StringList" Name="ClassTypes"/>
    </tt:Data>
    <tt:ParentTopic>tns1:RuleEngine/ObjectDetection/Object</tt:ParentTopic>
  </tt:Messages>
</tt:RuleDescription>

Note:

  • If this annex needs to be added to analytics profile, the same needs to be made 'normative'

Unclear how to use weight in ColorCluster

The text states that weight "denotes the fraction of pixels assigned to the representative colour" which I interpret as a value between 0 and 1 but the examples use values 5 and 90. See:

<para>Colour descriptor contains the representative colours in detected object/region. The representative colours are computed from image of detected object/region each time. Each representative colour value is a vector of specified colour space. The representative colour weight (Weight) denotes the fraction of pixels assigned to the representative colour. Colour covariance describes the variation of colour values around the representative colour value in colour space thus representative colour denotes a region in colour space. The following table lists the acceptable values for Colourspace attribute</para>

I propose to clarify this to say:
The representative colour weight (Weight) denotes the percentage of pixels assigned to the representative colour. The sum of all ColorCluster weights in a ColorDescriptor shall not exceed 100.
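Under the clarified reading, weights would be percentages of assigned pixels. A tiny sketch (the helper name is hypothetical) that reproduces the example values 90 and 5:

```python
# Sketch: cluster weights as percentages of pixels assigned to each
# representative colour; unassigned pixels keep the sum below 100.
def cluster_weights(pixel_counts, total_pixels):
    return [100.0 * c / total_pixels for c in pixel_counts]

# 90% of pixels in one cluster, 5% in another; the rest unassigned.
weights = cluster_weights([9000, 500], 10000)
print(weights)              # [90.0, 5.0]
print(sum(weights) <= 100)  # True
```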

Handling of optional parameters

#2694

Bosch devices have numerous optional rule parameters, e.g. MinimumObjectSize, to constrain the detection.
By default these constraints are inactive. Our current implementation choice is to not provide them when inactive.
There are two issues:
A client has a hard time understanding which parameters are mandatory and which are optional.
While adding is straightforward, there is no way for a client to remove them.
A device-side workaround could be to e.g. remove the parameter if the minimum is set to zero.
For handling the remove case see spin-off ticket #2704.
Proposal for signaling optional
Insert a new section 5.2.3.3 Occurrence
Configuration parameters may signal their allowed occurrence using the minOccurs attribute. Setting the value to zero signals that the parameter is optional.
Add to analytics.wsdl
<xs:complexType name="ConfigOptions">
...
<xs:attribute name="minOccurs" type="xs:int">
with annotation
Minimal number of occurrences. Defaults to one.
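A client could then classify parameters by reading the proposed attribute, treating an absent attribute as mandatory. The XML below merely illustrates the proposal; the final attribute placement in the schema may differ:

```python
# Sketch: interpret the proposed minOccurs attribute on rule options.
# minOccurs="0" marks a parameter optional; absence defaults to one (mandatory).
import xml.etree.ElementTree as ET

options_xml = """
<ConfigOptions xmlns="http://www.onvif.org/ver20/analytics/wsdl">
  <Option Name="MinimumObjectSize" minOccurs="0"/>
  <Option Name="ClassFilter"/>
</ConfigOptions>
"""

NS = "{http://www.onvif.org/ver20/analytics/wsdl}"
root = ET.fromstring(options_xml)
for opt in root.findall(f"{NS}Option"):
    optional = int(opt.get("minOccurs", "1")) == 0
    print(opt.get("Name"), "optional" if optional else "mandatory")
```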

Changed 3 months ago by hans.busch
Description modified (diff)

Changed 2 months ago by stefan.andersson
If the attribute is just going to be used for signalling optional/mandatory, then it could be a boolean instead of an integer, e.g.:
<xs:attribute name="Mandatory" type="xs:boolean" use="optional" default="true">

Changed 5 weeks ago by hans.busch
Description modified (diff)

Changed 8 days ago by sujith.raman
Milestone changed from VEnhNx.5 to ChangeRequest
Change accepted and converting this ticket to CR based on 8/20 Telco.

Clarify origin of bounds coordinates

#2703

Related to #2691:

The region motion detector relies on the pixel-based Bounds definition of the VideoSourceConfiguration. I couldn't find any definition regarding the origin of the Y axis.

The following sentences from the spec and WSDL indirectly suggest that the origin is in the lower left corner. However, during the plugfest we observed different vendor implementations.

All coordinate systems (e.g. Privacy Masks in the Media2 Service and Motion Regions in the Analytics service) that apply to a video source configuration are based on the resulting image after applying the Bounds and then Rotate to the source image.

and

Rectangle specifying the Video capturing area. The capturing area shall not be larger than the whole Video source area.

Proposal

Add to 5.2.2 Video source configuration

The origin of the bounds is located in the upper left corner of the video source.
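The interoperability cost of the ambiguity is a vertical flip: converting a rectangle's y offset between the two candidate origins is a one-liner (values below are illustrative):

```python
# Sketch: convert a rectangle's y offset between a top-left and a
# bottom-left origin. The conversion is its own inverse.
def flip_y_origin(y, rect_height, image_height):
    return image_height - rect_height - y

# A 400-line-high capture area 100 lines from the top of a 1080-line
# sensor sits 580 lines from the bottom:
print(flip_y_origin(100, 400, 1080))  # 580
```

Two implementations that disagree on the origin will place the same Bounds rectangle at vertically mirrored positions, which matches the plugfest observation above.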

Regarding the Audioencoding mime names in media2

At present we have the following definition in onvif.xsd, referring to the IANA media types for audio encoding.

<xs:simpleType name="AudioEncodingMimeNames">
		<xs:annotation>
			<xs:documentation>Audio Media Subtypes as referenced by IANA (without the leading "audio/" Audio Media Type).  See also <a href="https://www.iana.org/assignments/media-types/media-types.xhtml#audio"> IANA Media Types</a>.</xs:documentation>
		</xs:annotation>
		<xs:restriction base="xs:string">
			<xs:enumeration value="PCMU"/>		<!-- G.711 uLaw -->
			<xs:enumeration value="G726"/>
			<xs:enumeration value="MP4A-LATM"/>		<!-- AAC -->
			<xs:enumeration value="mpeg4-generic"/>
		</xs:restriction>
	</xs:simpleType>

ISSUE:
In the predefined list G726 is defined, but in the IANA registry only G726-16, G726-24, G726-32 and G726-40 are registered.
So G726 does not match IANA, while the other formats (PCMU, mpeg4-generic, MP4A-LATM) do.
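The mismatch can be checked mechanically against the registry. The sketch below parses the enumeration from the schema excerpt above and compares it with a hardcoded excerpt of the IANA audio subtypes; the IANA set reflects the registry at the time of writing and is an assumption, not fetched live:

```python
import xml.etree.ElementTree as ET

XSD = """<xs:simpleType name="AudioEncodingMimeNames"
              xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:restriction base="xs:string">
    <xs:enumeration value="PCMU"/>
    <xs:enumeration value="G726"/>
    <xs:enumeration value="MP4A-LATM"/>
    <xs:enumeration value="mpeg4-generic"/>
  </xs:restriction>
</xs:simpleType>"""

XS = "{http://www.w3.org/2001/XMLSchema}"

# Relevant excerpt of audio media subtypes registered with IANA.
IANA_AUDIO_SUBTYPES = {
    "PCMU", "MP4A-LATM", "mpeg4-generic",
    "G726-16", "G726-24", "G726-32", "G726-40",
}

# Collect the enumeration values declared in the ONVIF simpleType.
onvif_values = {
    e.get("value")
    for e in ET.fromstring(XSD).iter(XS + "enumeration")
}

# Only the bare "G726" has no IANA registration.
unregistered = onvif_values - IANA_AUDIO_SUBTYPES
```

Running this leaves `unregistered` containing just the bare "G726" value, which is the mismatch reported above.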

Is the VideoAnalyticsDevice service deprecated?

The specification for the VideoAnalyticsDevice service disappeared from the ONVIF specifications in 2018.
Is it deprecated? Has it been replaced by the new Analytics service?

We (in the company I work for) are developing an analytics device which is not a camera, so we need to give it a streaming URI to analyze. How can we do this with the new Analytics service?
With the old service we would create a Receiver and then pass the receiver token to CreateAnalyticsEngineControl, but we cannot find anything similar in the new service.

If the old service is the only solution, what about using more recent encodings such as H.265? Are we limited to the Media (ver10) service?

Profile T support tests failing

Hi,
I have to give profile T support for an ip camera. In ONVIF Conformance Test Tool, following steps are failed,

1. Event Service/Pull-Point Notification feature
2. Event Service/MaxPullPoint feature support
3. Media2 Service\Video\H.264 or Media2 Service\Video\H.265 feature support
4. Media2 Service\Snapshot URI feature support
5. Media2 Service\Metadata feature support
6. Media2 Service\Real-time Streaming feature support
7. Media2 Service\OSD feature support
8. Media2 Service/Media2 Events/Media/ProfileChanged feature support
9. Media2 Service/Media2 Events/Media/ConfigurationChanged feature support
10. Imaging Service\Tampering Events* feature support
11. Imaging Service\Motion Alarm feature support

referred document : https://www.onvif.org/wp-content/uploads/2021/06/ONVIF_Profiles_Conformance_Device_Test_Specification_21.06.pdf?ccc393&ccc393

As per this document (page 102, section 7), for the Imaging service it says to add support to the GetServiceCapabilities, GetImagingSettings, GetOptions and SetImagingSettings APIs. I have already added that code, yet the steps still fail.

On what basis, and under which APIs, do I have to add responses in order to pass? How can I give full Profile T support to the device? Is there any document to refer to?

Kindly help... Thanks in advance.

Improving LicensePlate and VehicleInfo example metadata with likelihoods

Ref: ONVIF Analytics Service specification

5.1.3.7 Vehicle information descriptor

  • This section does not have an example metadata XML sample

5.1.3.9 License plate information descriptor

  • This section's XML sample does not have a 'Likelihood' attribute

Created this CR based on the feedback received from our internal team for clarity to help developers.
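For illustration, a metadata sample along the lines the CR asks for might look as follows. The element names follow the license plate information descriptor of the Analytics spec, but the exact structure, namespace usage and values here are assumptions, not normative text:

```python
import xml.etree.ElementTree as ET

# Illustrative only: a license plate descriptor where each string element
# carries a Likelihood attribute expressing the detector's confidence.
SAMPLE = """<tt:LicensePlateInfo xmlns:tt="http://www.onvif.org/ver10/schema">
  <tt:PlateNumber Likelihood="0.95">AB12345</tt:PlateNumber>
  <tt:PlateType Likelihood="0.80">Normal</tt:PlateType>
  <tt:CountryCode Likelihood="0.70">DE</tt:CountryCode>
</tt:LicensePlateInfo>"""

root = ET.fromstring(SAMPLE)

# Strip the namespace from each child tag and collect its likelihood.
likelihoods = {
    child.tag.split("}")[1]: float(child.get("Likelihood"))
    for child in root
}
```

Parsing the sample yields a per-field confidence map, which is the kind of information a developer would want the spec's example to demonstrate.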

Add clarification to Option type definition section in Analytics spec

#2708

It is not very clear for which parameter types options can be provided.

The DTT assumes that options need to be provided for all parameters in a rule or analytics module.

But, for example, if a Boolean type is used as a parameter, do we have to provide options for it?

Suggestion to add following note.

PROPOSAL:

Add to 5.2.3.2 Option type definition

NOTE:

A device shall provide options only for parameters with constraints (parameters such as Boolean may be excluded from the options).
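One way to read the proposed note is as a filter applied when building an options response: advertise options only for types whose value space needs constraining. A hypothetical sketch; the type names and the filtering rule itself are illustrative, not taken from the spec:

```python
def needs_options(simple_item_type):
    """Return True when a parameter type carries constraints that an options
    response should advertise (ranges, enumerations, etc.).

    Boolean has a fixed two-value space and needs no options entry.
    """
    unconstrained = {"xs:boolean"}
    return simple_item_type not in unconstrained

# Hypothetical rule parameters mapped to their declared simple types.
params = {"Armed": "xs:boolean", "Sensitivity": "xs:int", "Mode": "xs:string"}

# Only the constrained parameters would appear in the options response.
options_needed = [name for name, t in params.items() if needs_options(t)]
```

With these assumed parameters, only Sensitivity and Mode would get entries in the options response, while the Boolean Armed parameter is skipped.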

Correct typo NetworkNotSupported to NetworkConfigNotSupported in Core spec

Ref: https://wush.net/trac/onvif-ext/ticket/2123

  1. Core spec - change for NetworkConfigNotSupported:
    Name of capability is incorrect: NetworkNotSupported instead of NetworkConfigNotSupported
    8.2 Network
    A device shall support the commands defined in this section unless the NetworkNotSupported capability is signalled as 'True' confirming it doesn’t support network configuration.

  2. Additionally, Core spec does not have description of NetworkConfigNotSupported in the table in 8.1.2.3 GetServiceCapabilities.

Improve rule introduction section

<para>The following rules apply to static cameras. In case of a PTZ device, image-based rules should contain an additional ElementItem. The ElementItem identifies the position of the device for which the rule has been setup. The corresponding ElementItemDescription resembles the following:</para>

The section should introduce the content and structure of Annex A.

Backward compatible way of handling network interfaces and user handling

#2717

Background

Profile M WG discussions led us to believe that network configuration and user handling related interfaces are not relevant to the profile.
But a DTT vendor raised an important point: ALL tests related to core mandatory features are run for every profile. So unless we write an additional requirement in the profile OR relax the requirement in the core, the tool cannot be modified.

Here are the options under discussion

  1. For Network configuration

Option 1:

Modify Profile M specification "System" section to specify ONLY "GetNetworkProtocols" 

Option 2:

Add a capability flag, say "NetworkConfigurationNotSupported", in the Core specification with the requirement below:
    A device that signals the NetworkConfigurationNotSupported capability shall NOT support any of the below interfaces except "GetNetworkProtocols":
        Get and set hostname.
        Get and set DNS configurations.
        Get and set NTP configurations.
        Get and set dynamic DNS.
        Get and set network interface configurations.
        Enable/disable network protocols.
        Get and set default gateway.
        Get and set zero configuration.
        Get, set, add and delete IP address filter.
        Wireless network interface configuration 
  2. For User handling

    Add a capability flag, say "UserHandlingNotSupported", in the Core specification with the requirement below:
    A device that signals the UserHandlingNotSupported capability shall NOT support any user credential management interfaces via the GetUsers, CreateUsers, DeleteUsers and SetUser methods.
    Until now there is no requirement at all regarding this in the Profile M specification.
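If option 2 were adopted, a device might signal the two proposed flags roughly as below. This is a hypothetical fragment: neither attribute exists in any published ONVIF schema, and the element placement is an assumption made for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical device-service capabilities fragment carrying the two
# proposed flags; attribute names come from the proposal above.
CAPS = """<tds:Capabilities
    xmlns:tds="http://www.onvif.org/ver10/device/wsdl">
  <tds:System NetworkConfigurationNotSupported="true"
              UserHandlingNotSupported="true"/>
</tds:Capabilities>"""

system = ET.fromstring(CAPS).find(
    "{http://www.onvif.org/ver10/device/wsdl}System")

# A client would read the flags and skip the corresponding test groups.
flags = {name: value == "true" for name, value in system.attrib.items()}
```

A test tool could then consult `flags` before exercising network configuration or user management commands, which is the backward-compatible behaviour the ticket is after.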
