apache / plc4x

PLC4X The Industrial IoT adapter

Home Page: https://plc4x.apache.org/

License: Apache License 2.0


plc4x's Introduction




The Industrial IoT adapter

The ultimate goal of PLC4X is to create a set of libraries that allow unified access to any type of PLC.



About Apache PLC4X

Apache PLC4X is an effort to create a set of libraries for communicating with industrial grade programmable logic controllers (PLCs) in a uniform way. We are planning on shipping libraries for usage in:

  1. Java
  2. Go
  3. C (not ready for usage)
  4. Python (not ready for usage)
  5. C# (.Net) (not ready for usage - abandoned)

PLC4X also integrates with other Apache projects.

It also brings stand-alone (Java) utilities, such as:

  • OPC-UA Server: Enables you to communicate with legacy devices using PLC4X with OPC-UA.
  • PLC4X Server: Enables you to communicate with a central PLC4X Server which then communicates with devices via PLC4X.

It also provides (Java) tools for usage inside an application:

  • Connection Cache: the new implementation of our framework for re-using and sharing PLC connections
  • Connection Pool: the old implementation of our framework for re-using and sharing PLC connections
  • OPM (Object-PLC-Mapping): allows binding PLC fields to properties in Java POJOs, similar to JPA
  • Scraper: a utility for scheduled and repeated data collection
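To illustrate what the connection cache buys you, here is a minimal, self-contained sketch of the idea behind re-using connections keyed by URL. This is not the PLC4X API: Connection, getConnection, and the URLs are illustrative stand-ins.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheSketch {
    // Stand-in for a real PLC connection type; name is illustrative only.
    interface Connection { String url(); }

    static final Map<String, Connection> CACHE = new ConcurrentHashMap<>();
    static int opened = 0; // counts how many connections were actually created

    static Connection getConnection(String url) {
        // computeIfAbsent opens a connection only on the first request per URL;
        // later callers share the cached instance instead of reconnecting.
        return CACHE.computeIfAbsent(url, u -> {
            opened++;
            return () -> u;
        });
    }

    public static void main(String[] args) {
        Connection a = getConnection("s7://192.168.0.1");
        Connection b = getConnection("s7://192.168.0.1");
        System.out.println(a == b);  // true: the connection is shared
        System.out.println(opened);  // 1: only one connection was opened
    }
}
```

The real implementations additionally deal with connection lifecycle, time-outs, and invalidation, which this sketch deliberately omits.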

Getting started

Usage differs depending on the programming language, so please refer to the Getting Started section on the PLC4X website for your language of choice.

Java

NOTE: Building all parts of Apache PLC4X currently requires at least Java 19 (we have tested all versions up to Java 21); however, only the Java Tool UI requires this right now. All other modules need at least Java 11.

See the PLC4J user guide on the website to start using PLC4X in your Java application: https://plc4x.apache.org/users/getting-started/plc4j.html

Developers

Environment

Currently, the project is configured to require the following software:

  1. Java 11 JDK: for running Maven in general as well as compiling the Java and Scala modules, with JAVA_HOME configured to point to that JDK.
  2. Git (even when working on the source distribution)
  3. (Optional, for running all tests) libpcap/Npcap for raw socket tests in Java or use of passive-mode drivers
  4. (Optional, for building the website) Graphviz : For generating the graphs in the documentation

WARNING: The code generation uses a utility which requires some additional VM settings. When running a build from the root, the settings in .mvn/jvm.config are applied automatically. When building only a sub-module, it is important to set the VM args: --add-exports jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED --add-exports jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED --add-exports jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED --add-exports jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED --add-exports jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED. In IntelliJ, for example, set these in the IDE settings under: Preferences | Build, Execution, Deployment | Build Tools | Maven | Runner: JVM Options.

A more detailed description is available on our website:

https://plc4x.apache.org/developers/preparing/index.html

For building PLC4C we also need:

All requirements are retrieved by the build itself

For building PLC4Go we also need:

All requirements are retrieved by the build itself

For building PLC4Py we also need:

  1. Python 3.7 or higher
  2. Python pyenv

For building PLC4Net we also need:

  1. DotNet SDK 6.0

With this setup you will be able to build the Java part of PLC4X.

When doing a full build, we automatically run a prerequisite check and fail the build with an explanation if not all requirements are met.

Building with Docker

If you don't want to bother setting up the environment on your normal system, and you have Docker installed, you can also build everything in a Docker container:

   docker compose up

This will build a local Docker container able to build all parts of PLC4X and will run a Maven build of the local directory inside this container.

The default build will run a local release-build, so it can also be used to ensure reproducible builds when releasing.

By default, it stores files locally:

  • Downloaded maven artifacts will go to out/.repository
  • Deployed artifacts will go to out/.local-snapshots-dir

The reason for this is that otherwise the artifacts would be packaged with the source-release artifact, resulting in a zip archive of 12 GB or more. Saving them in the main target directory, however, would make the build delete the local repo every time mvn clean is run. The out directory is excluded by default in the assembly descriptor and is therefore not included in the source zip.

Getting Started

You must have at least Java 11 installed on your system and connectivity to Maven Central (for downloading external third party dependencies). Maven 3.6 is required to build, so be sure it's installed and available on your system.

NOTE: When using Java 21 currently the Apache Kafka integration module is excluded from the build as one of the plugins it requires has proven to be incompatible with this version.

NOTE: A convenience Maven Wrapper is included in the repo; when used, it automatically downloads and installs Maven. If you want to use it, please run ./mvnw or mvnw instead of the normal mvn command.

NOTE: When running from sources-zip, the mvnw might not be executable on Mac or Linux. This can easily be fixed by running the following command in the directory.

$ chmod +x mvnw

NOTE: If you are working on a Windows system, please use mvnw.cmd instead of ./mvnw in the following build commands.

Build PLC4X Java jars and install them in your local maven repository

./mvnw install

You can now construct Java applications that use PLC4X. The PLC4X examples are a good place to start and are available inside the plc4j/examples directory.

The Go drivers can be built by enabling the with-go profile:

./mvnw -P with-go install 

The Java drivers can be built by enabling the with-java profile:

./mvnw -P with-java install 

The C# / .Net implementation is currently a work in progress. In order to build the C# / .Net module, you currently need to activate the with-dotnet profile.

./mvnw -P with-dotnet install

The Python implementation is currently in a somewhat unclean state and still needs refactoring. In order to build the Python module, you currently need to activate the with-python profile.

./mvnw -P with-python install

In order to build everything the following command should work:

./mvnw -P with-c,with-dotnet,with-go,with-java,with-python,enable-all-checks,update-generated-code install

Community

Join the PLC4X community by using one of the following channels. We'll be glad to help!

Mailing Lists

Subscribe to one of our mailing lists.

See also: https://plc4x.apache.org/mailing-lists.html

Twitter

Get the latest PLC4X news on Twitter: https://twitter.com/ApachePlc4x

Contributing

There are multiple ways in which you can become involved with the PLC4X project.

These include, but are not limited to:

  • Providing information and insights
  • Testing PLC4X and providing feedback
  • Submitting Pull Requests
  • Filing Bug-Reports
  • Active communication on our mailing lists
  • Promoting the project (articles, blog posts, talks at conferences)
  • Documentation

We are a very friendly bunch so don’t be afraid to step forward. If you'd like to contribute to PLC4X, have a look at our contribution guide!

Licensing

Apache PLC4X is released under the Apache License Version 2.0.

plc4x's People

Contributors

bjoernhoeper, ceos01, chrisdutz, cptblaubaerac, dependabot[bot], dlaboss, dominikriemer, etiennerobinet, foxpluto, github-actions[bot], glcj, hongjinlin, hutcheb, julianfeinauer, justinmclean, nalim2, nielsbasjes, niklasmerz, ottlukas, ottobackwards, quanticpony, rvs, sommermarkus, splatch, sruehl, takraj, thomas169, timbo2k, turbaszek, vemmert


plc4x's Issues

ADS Big-Endian Support

For some reason the response of the PLC seems not to be in LITTLE_ENDIAN format.
Therefore, the Byte to Integer conversion in [1] results in a negative number.
 
The issue occurs during the symbolHandle creation process in [2].
 
I had a look at the ADS specification [3] and the response should be in LITTLE_ENDIAN format, but for some reason it is not.
 
As suggested by [~cdutz] we should create an option to override the endianness in the connection-string.
 
[1] https://github.com/apache/plc4x/blob/develop/plc4j/protocols/ads/src/main/java/org/apache/plc4x/java/ads/api/util/UnsignedIntLEByteValue.java#L39
[2] https://github.com/apache/plc4x/blob/develop/plc4j/drivers/ads/src/main/java/org/apache/plc4x/java/ads/connection/AdsAbstractPlcConnection.java#L187
[3] https://infosys.beckhoff.com/index.php?content=../content/1031/tcplclibutilities/html/TcPlcLibUtilities_AddOn_ByteOrder.htm&id=
 

Imported from Jira PLC4X-133. Original Jira may contain additional context.
Reported by: millecker.
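The negative numbers described above are the classic symptom of decoding little-endian bytes with the wrong signedness or byte order. A self-contained sketch of reading an unsigned 32-bit little-endian value in plain Java (readUint32LE is an illustrative helper, not PLC4X code):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianDemo {
    // Interpret 4 bytes as an unsigned 32-bit little-endian value.
    static long readUint32LE(byte[] bytes) {
        return ByteBuffer.wrap(bytes)
                .order(ByteOrder.LITTLE_ENDIAN)
                .getInt() & 0xFFFFFFFFL; // mask keeps the value non-negative
    }

    public static void main(String[] args) {
        byte[] raw = {(byte) 0x90, 0x01, 0x00, 0x00}; // 0x00000190 = 400 little-endian
        System.out.println(readUint32LE(raw)); // prints 400

        // If the device actually replied big-endian, the same bytes decode
        // to a completely different (and, without masking, negative) number:
        long be = ByteBuffer.wrap(raw).order(ByteOrder.BIG_ENDIAN).getInt() & 0xFFFFFFFFL;
        System.out.println(be); // prints 2415984640
    }
}
```

An endianness override in the connection string, as suggested in the issue, would essentially switch the ByteOrder used in this kind of decoding.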

Subscription to Items

This version implements the subscription to PLC Items under the philosophy "Don't call me, I'll call you".
The requested items are sent periodically by the PLC to the driver. The driver collects the data and sends it to the client.
This solution is ideal for systems that need rapid data updating and low resource consumption.

Currently only S7-300 and S7-400 are supported.
S7-1500 support will come in the next revision.

Imported from Jira PLC4X-184. Original Jira may contain additional context.
Reported by: cgarcia.

Pending threads after connection.close

I’m facing some issue when I try to quit my application. It seems connection.close() is not stopping all pending threads. Is this a known issue or am I doing something wrong?


String url = "s7://...";
PlcDriverManager manager = new PlcDriverManager();
PlcConnection connection = manager.getConnection(url);
connection.close();
System.out.println("closed");  // gets printed, but the application hangs afterwards

I created a thread dump which is attached.

Imported from Jira PLC4X-249. Original Jira may contain additional context.
Reported by: svoss.

Add Async API for new Connections

Currently we only support an async API for requests.
But as creating a new connection is sometimes even slower, it would be good to provide an additional async API which returns a Future<PlcConnection> and leaves it to the client to handle asynchronicity.

Imported from Jira PLC4X-136. Original Jira may contain additional context.
Reported by: julian.feinauer.
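The proposal above can be sketched with a plain CompletableFuture wrapping a blocking connect. Everything here (Connection, blockingConnect, connectAsync) is an illustrative stand-in, not the PLC4X API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class AsyncConnectSketch {
    // Stand-in for PlcConnection.
    static class Connection {
        final String url;
        Connection(String url) { this.url = url; }
    }

    // Simulates a slow, blocking connect() call.
    static Connection blockingConnect(String url) {
        try { TimeUnit.MILLISECONDS.sleep(50); } catch (InterruptedException ignored) { }
        return new Connection(url);
    }

    // Wraps the blocking call, leaving it to the client how to handle
    // asynchronicity (callbacks, composition, or blocking with a timeout).
    static CompletableFuture<Connection> connectAsync(String url) {
        return CompletableFuture.supplyAsync(() -> blockingConnect(url));
    }

    public static void main(String[] args) throws Exception {
        CompletableFuture<Connection> future = connectAsync("s7://192.168.0.1");
        // The caller stays responsive and can bound the wait:
        Connection c = future.get(5, TimeUnit.SECONDS);
        System.out.println(c.url);
    }
}
```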

[S7] Communication to S7 PLC dies in some situations

The following is from the Mailing Thread [1]:

In the last weeks we observed multiple times strange behavior when connecting to Siemens S7 devices.
We have not yet been able to trace it down entirely but I have the assumption that it is an issue with the PooledPlcDriverManager.

What's the issue?
When doing requests (either via OPM or the "regular" API) we reach a point where all subsequent requests simply fail (and in some cases we were no longer able to send requests to the PLC from other instances, so it looks like the internal server went down).

What's the setup?
If I remember correctly, all situations where this occurred used the Pool as the basis.
We had it both with OPM and the normal API, but NOT with the Scraper.

I remember that I spent about a whole day at the Hackathon in Mallorca getting all the timeout handling to work correctly, as the S7 does not like it when you simply cancel your request futures.
Currently there are two "suspects" from my side.

First, the pool calls the .connect() method on a new Connection it establishes, but by API convention you also have to do that in your own code, so it gets called multiple times, which could mess things up.
Second, a connection can also time out (but it's not a future in our API), so in the Scraper I implemented it as a Future with a timeout (as I'm unsure how everything behaves if the pool starts to initialize a connection but then the "waitTime" times out and it abandons it).

[1] https://lists.apache.org/thread.html/328a6780b34b4fd2e3298e9e70340293ebb397b1978a7b631030067e@%3Cdev.plc4x.apache.org%3E

Imported from Jira PLC4X-132. Original Jira may contain additional context.
Reported by: julian.feinauer.

[BUG][S7]Request split / Message Split (Optimizer)

  • A request with many Items is divided within the limits allowed by the size of the PDU.

  • If one of the requested Items exceeds the size of the PDU it is trimmed to the maximum size of the PDU. The existing code tries to split the message, but it fails. This generates an unsafe condition.

  • Rethink the routine for handling long messages, as an additional layer in Netty.

Imported from Jira PLC4X-182. Original Jira may contain additional context.
Reported by: cgarcia.
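One way to think about the requested rework: group items into batches so that each request stays within the PDU limit, and reject rather than silently trim items that can never fit (the trimming is exactly the unsafe condition the issue describes). A minimal sketch, using made-up byte sizes in place of real request items:

```java
import java.util.ArrayList;
import java.util.List;

public class PduSplitSketch {
    // Splits a list of item sizes into batches whose total stays within the
    // PDU limit. Items larger than the PDU are rejected up front instead of
    // being trimmed, which would corrupt the request.
    static List<List<Integer>> split(List<Integer> itemSizes, int maxPdu) {
        List<List<Integer>> batches = new ArrayList<>();
        List<Integer> current = new ArrayList<>();
        int used = 0;
        for (int size : itemSizes) {
            if (size > maxPdu) {
                throw new IllegalArgumentException(
                        "Item of size " + size + " exceeds PDU size " + maxPdu);
            }
            if (used + size > maxPdu) {
                // Current batch is full: start a new request.
                batches.add(current);
                current = new ArrayList<>();
                used = 0;
            }
            current.add(size);
            used += size;
        }
        if (!current.isEmpty()) batches.add(current);
        return batches;
    }

    public static void main(String[] args) {
        // With a 240-byte PDU, three 100-byte items become two requests.
        System.out.println(split(List.of(100, 100, 100), 240)); // [[100, 100], [100]]
    }
}
```

A real implementation would also account for per-request and per-item header overhead, which this sketch ignores.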

Add minimal idiomatic Scala API

We want to implement an idiomatic Scala API:

  • no use of Java standard library types
  • no use of PLC4X Java API types
  • no use of exceptions; instead, use of Either
  • no use of Java futures; instead, use of Scala Futures

Imported from Jira PLC4X-12. Original Jira may contain additional context.
Reported by: britter.

Incorrect shutdown sequence on error

In the test case below, there is a permission problem, and I think because of that the closing of the channel and buffers happens out of order / in the wrong state.

11:59:51.855 [pool-1-thread-1] WARN i.n.c.AbstractChannelHandlerContext - Failed to mark a promise as failure because it has succeeded already: DefaultChannelPromise@63ab80ee(success)
java.lang.IllegalStateException: close() must be invoked after the channel is closed.
at io.netty.channel.ChannelOutboundBuffer.close(ChannelOutboundBuffer.java:683)
at io.netty.channel.ChannelOutboundBuffer.close(ChannelOutboundBuffer.java:711)
at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:741)
at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:607)
at io.netty.channel.DefaultChannelPipeline$HeadContext.close(DefaultChannelPipeline.java:1352)
at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:622)
at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:606)
at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:472)
at io.netty.channel.DefaultChannelPipeline.close(DefaultChannelPipeline.java:957)
at io.netty.channel.AbstractChannel.close(AbstractChannel.java:232)
at io.netty.channel.ChannelFutureListener$2.operationComplete(ChannelFutureListener.java:56)
at io.netty.channel.ChannelFutureListener$2.operationComplete(ChannelFutureListener.java:52)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:183)
at io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:95)
at io.netty.bootstrap.Bootstrap$3.run(Bootstrap.java:248)
at io.netty.channel.ThreadPerChannelEventLoop.run(ThreadPerChannelEventLoop.java:69)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(Thread.java:748)
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 3.26 s <<< FAILURE! - in org.apache.plc4x.java.utils.rawsockets.netty.RawSocketChannelTest
[ERROR] doConnect Time elapsed: 2.478 s <<< ERROR!
org.pcap4j.core.PcapNativeException: lo: You don't have permission to capture on that device (socket: Operation not permitted)

Imported from Jira PLC4X-205. Original Jira may contain additional context.
Reported by: niclas.

[Modbus] Apache NiFi processor throws java.io.IOException after a while

My Plc4xSourceProcessor's PLC connection string is "modbus:tcp://10.0.2.238:502?slave=1" and its PLC resource address string is "test1=holding-register:1".

I can get the values 5-10 times at 5-second intervals, but then I get the exception below. I can read the values with the Modbus Poll application, so most probably the PLC4X side has the problem.

I may also get some other exceptions when starting the processor, which are also below.

PS: A Wireshark trace is attached. I read 16 times, then I get the exception.
 
-- This is the exception I get after some successful read operations:
2020-08-27 13:19:06,091 WARN [nioEventLoopGroup-8-1] io.netty.channel.DefaultChannelPipeline An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.io.IOException: An existing connection was forcibly closed by the remote host
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(Unknown Source)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(Unknown Source)
at sun.nio.ch.IOUtil.read(Unknown Source)
at sun.nio.ch.SocketChannelImpl.read(Unknown Source)
at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Unknown Source)
 
 
-- This is the exception I sometimes get when starting up the processor:
2020-08-27 13:17:16,813 WARN [Timer-Driven Process Thread-11] o.a.n.controller.tasks.ConnectableTask Administratively Yielding Plc4xSourceProcessor[id=2f12f5b4-0174-1000-724e-a53ba0fc1652] due to uncaught Exception: java.lang.NullPointerException
java.lang.NullPointerException: null
at org.apache.plc4x.nifi.Plc4xSourceProcessor.onTrigger(Plc4xSourceProcessor.java:50)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1174)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.runAndReset(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)

Imported from Jira PLC4X-245. Original Jira may contain additional context.
Reported by: turker.tunali.

Handling of FLOAT and DOUBLE has issues with generated code

Currently the code generation still uses the Supplier-Based reading of floating-point values. This code seems to have issues with the value "0.0", which is displayed as an extremely small floating-point value.

When switching to ReadBuffer readFloat and readDouble implementation they seem to be causing errors.

Imported from Jira PLC4X-254. Original Jira may contain additional context.
Reported by: cdutz.

Docker build not working - mvnw permission denied etc

Hello, I'm trying to install PLC4X with Docker and get a couple of errors - 

$ docker build -t plc4x .
...
Step 21/39 : COPY . /ws/
---> e86e4d250f0c
Step 22/39 : WORKDIR /ws
---> Running in b2d9525763e0
Removing intermediate container b2d9525763e0
---> 2d714dffc27b
Step 23/39 : RUN ./mvnw -P with-boost,with-c,with-cpp,with-dotnet,with-go,with-logstash,with-opcua-werver,with-proxies,with-python,with-logstash,with-sandbox com.offbytwo.maven.plugins:maven-dependency-plugin:3.1.1.MDEP568:go-offline -DexcludeGroupIds=org.apache.plc4x,org.apache.plc4x.examples,org.apache.plc4x.sandbox
---> Running in 97e09acbcdfd
/bin/sh: 1: ./mvnw: Permission denied
The command '/bin/sh -c ./mvnw -P with-boost,with-c,with-cpp,with-dotnet,with-go,with-logstash,with-opcua-werver,with-proxies,with-python,with-logstash,with-sandbox com.offbytwo.maven.plugins:maven-dependency-plugin:3.1.1.MDEP568:go-offline -DexcludeGroupIds=org.apache.plc4x,org.apache.plc4x.examples,org.apache.plc4x.sandbox' returned a non-zero code: 126

So I added a step to the Dockerfile at line 72:

RUN chmod +x ./mvnw

That got it past that error, then I got another one -

Non-resolvable parent POM for org.apache.plc4x.sandbox:plc4cpp:[unknown-version]: Could not find artifact org.apache.plc4x.sandbox:plc4x-sandbox:pom:0.8.0-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 24, column 11

I'm not familiar with maven so am not sure how to get past it...

Thank you for any help - 

 

Imported from Jira PLC4X-284. Original Jira may contain additional context.
Reported by: bburnskm.

Reading data from PLC via Modbus with Camel using timer

I want to get data from the PLC with a period of 1 ms. I tried using a consumer:


val modbusConsumerEndpoint = Plc4XEndpoint(modbusEndpointUri, plcComponent).apply {
    tags = mapOf(fieldName to "input-register:$plcInputPort")
    period = 1
}

from(modbusConsumerEndpoint)
    .process {
        it.message.body = (it.message.body as Map<String, Any>)[ModbusEndpointParams.fieldName]
        it.setMainBody(ctx)
    }
    .marshal().json()
    .setHeader(KafkaConstants.KEY, constant(""))
    .to(mainKafkaEndpoint)

But I get only one loop. Plc4XConsumer may work with a trigger, but I couldn't find any examples of that.

Alternatively, I tried to use a Camel chain with a timer:


from("timer:foo?period=1")
    .process {
        it.message.body = mapOf(fieldName to "input-register:$plcInputPort")
    }
    .to("plc4x:modbus://uri")

But Plc4XProducer works only for writing. I solved this by creating my own endpoint with a custom producer that includes a ReadRequestBuilder, extending Plc4XEndpoint and Plc4XProducer. That does not look like a production-grade solution.

What is the right way to do this task?

Imported from Jira PLC4X-292. Original Jira may contain additional context.
Reported by: dtadescu.

[PLC4C] Fix ReadBuffer test for Strings

I had to disable the ReadBuffer test for reading Strings in PLC4C, as it was causing build errors on Windows.

Imported from Jira PLC4X-287. Original Jira may contain additional context.
Reported by: cdutz.

Add profinet-DCP

I found some work done towards implementing the Profinet-DCP protocol on the feature/profinet branch.

I see that the Profinet protocol is no longer on the develop branch; was it replaced by the S7 protocol?

If someone would have the time to guide me, I could work on it to update, test, and bring this discovery feature to PLC4X.

Imported from Jira PLC4X-286. Original Jira may contain additional context.
Reported by: adrlzr.

Modbus - Kafka does not close the connections

Hello,

I have been testing the connector and Kafka Connect for a few weeks now. For these tests, I ingest measurements from a sensor via Modbus, and this ingestion works correctly. The problem arises when I remove the connector: the connection to my sensor is not closed by default, which causes serious problems for me (Modbus devices limit the number of readers, in my case to 4 users set by the manufacturer). The only way I can close these connections is to restart the Kafka Connect container. Is there any option or way to force these connections to close? Here are the steps to replicate this error:

  1. Initially, I don't have any connector launched, so I don't have any active connection: active: 0, waiting: 0

  2. Then, I launch the connector using Kafka Rest API. At this moment, we're importing data into the kafka cluster. Using curl, we can see that the connector is working: curl -X 'GET' http://localhost:18083/connectors/ -> ["modbus-office"]. We can see right now that we only have 1 active connection: active: 1, waiting: 0

  3. I delete the Kafka connector using the REST API: curl -X 'DELETE' http://localhost:18083/connectors/modbus-office. Now, using the command from the previous point, we don't see any active connector; but analysing the active connections on the sensor, we can see that there is still one active connection: active: 1, waiting: 0.

This is what is giving me problems, as in theory, there should not be any active connections at the moment. We have made a proxy to limit the number of active connections, but so far, we have not been able to close it manually using the REST API of Kafka or some configuration of the connector.

Imported from Jira PLC4X-322. Original Jira may contain additional context.
Reported by: fdorado.

NIFI processors should only work with specific attributes

NiFi FlowFiles have 'automatic' attributes that are present even if they aren't explicitly created by a flow processor.

The PLC4X processors use all the attributes when writing; this is incorrect. They will be writing the wrong things.

The usual pattern in NiFi is to have the attribute name follow a pattern that can be read as a prefix, such as:

plc4x.address.

The processor then looks for any attributes that start with that prefix and uses the remainder of the attribute name as the field name.

As is, I would not think this processor would work in production.

Example of this pattern in NIFI:
https://github.com/apache/nifi/blob/7d20c03f89358a5d5c6db63e631013e1c4be4bc4/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/InvokeHTTP.java#L132

This is slightly different, as the processor above is looking for configured values, whereas I believe the PLC4X processor just wants to write anything with the attribute-name value.

This is doubly wrong, as attributes are kept in a map. There will never be more than one named value.

The proposed fix would be ->

  • change the reads attribute to use a prefix pattern as above
  • only write out those that match through regex / capture

Imported from Jira PLC4X-219. Original Jira may contain additional context.
Reported by: otto.
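The prefix pattern proposed above can be sketched in a few lines of plain Java. PREFIX and the addresses helper are illustrative, not the actual processor code; the example address format is also only an assumption:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PrefixFilterSketch {
    static final String PREFIX = "plc4x.address."; // prefix from the issue

    // Keeps only attributes carrying the prefix and strips it off, so the
    // remainder can serve as the field name; everything else (including
    // NiFi's automatic attributes) is ignored.
    static Map<String, String> addresses(Map<String, String> attributes) {
        Map<String, String> result = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : attributes.entrySet()) {
            if (e.getKey().startsWith(PREFIX)) {
                result.put(e.getKey().substring(PREFIX.length()), e.getValue());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> attrs = new LinkedHashMap<>();
        attrs.put("uuid", "auto-generated");           // automatic attribute, skipped
        attrs.put("plc4x.address.motorSpeed", "%DB1:4:INT");
        System.out.println(addresses(attrs)); // prints {motorSpeed=%DB1:4:INT}
    }
}
```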

Without “S7 Driver running in ACTIVE mode.” there is no response

[main] INFO org.apache.plc4x.java.PlcDriverManager - Instantiating new PLC Driver Manager with class loader sun.misc.Launcher$AppClassLoader@14dad5dc
[main] INFO org.apache.plc4x.java.PlcDriverManager - Registering available drivers...
[main] INFO org.apache.plc4x.java.PlcDriverManager - Registering driver for Protocol s7 (Siemens S7 (Basic))
[main] INFO org.apache.plc4x.java.transport.tcp.TcpChannelFactory - Configuring Bootstrap with Configuration{local-rack=1, local-slot=1, remote-rack=0, remot-slot=3, pduSize=1024, maxAmqCaller=8, maxAmqCallee=8, controllerType='null'}
[nioEventLoopGroup-2-1] INFO org.apache.plc4x.java.s7.readwrite.protocol.S7ProtocolLogic - S7 Driver running in ACTIVE mode.

Imported from Jira PLC4X-273. Original Jira may contain additional context.
Reported by: kdxq.

Mock driver always writes null value

The Mock driver is useful for developing applications without available hardware. Reads from Mock devices work, but writes seem broken.

Scenario:

Implement the MockDevice interface and its write method, and make sure this device is set on the connection using the MockConnection.setDevice method.

When a WriteRequest is issued on the connection, the MockDevice.write method is indeed called, but the value parameter is always null. The given value to write is not passed through.

Debugging and looking into the sourcecode shows following:

MockConnection.write uses following statement to retrieve the value:

 


// MockConnection:147
((MockField) writeRequest.getField(name)).getPlcValue()

This retrieves the plcValue member from the MockField object. This plcValue is only written by the constructor MockField(String address, MockPlcValue plcValue). However, I did not find any call to this constructor.

 

Note:

Other Drivers seem to use


writeRequest.getPlcValue(fieldName) 

instead, and do not have a value member in the Field class (e.g. Simulator, Modbus).

Imported from Jira PLC4X-288. Original Jira may contain additional context.
Reported by: teslanet-nl.

Connection to United Manufacturing Hub

Hi everyone!

My name is Jeremy and I am the lead developer of the United Manufacturing Hub.

The United Manufacturing Hub is an open-source industrial IoT and manufacturing application platform enabling users to connect, store, and access all relevant data sources in industrial manufacturing sites and build user-centric dashboards and applications.

It would be really great if we could make a connection between PLC4X and the United Manufacturing Hub, e.g. in the form of a configurable microservice that automatically takes data from various PLCs and pushes it into an MQTT broker. The United Manufacturing Hub (incl. Node-RED, Grafana, TimescaleDB, VerneMQ) then provides the infrastructure and data models to contextualize the data and enable use cases like OEE, performance management, machine-to-machine communication, digital shadow, etc. Furthermore, the United Manufacturing Hub allows extracting data from other data sources as well, such as sensors, barcode readers, or cameras.

Further information can be found here:

Looking forward to discussing how we can combine both open-source projects in the most effective way.

Regards,
Jeremy

Imported from Jira PLC4X-315. Original Jira may contain additional context.
Reported by: JeremyTheocharis.

User manual. Driver-S7

The objective of this manual is to show in detail the characteristics of the developed S7 driver, its potential, and its points for improvement.
Emphasis will be placed on item formats and on best practices for optimizing communication with Siemens S7 PLCs.
It concludes with practical examples for continuous (machinery) and batch (process) applications.

Imported from Jira PLC4X-180. Original Jira may contain additional context.
Reported by: cgarcia.

[S7] Implement connection closing for S7 protocol

It seems I skipped porting the code to gracefully close a connection in the transition from 0.6 to 0.7. This is great low-hanging fruit, so I'll leave it here for someone to pick up.

We generally already have the parts in place; however, they are not quite correct:

https://github.com/apache/plc4x/blob/develop/protocols/s7/src/main/resources/protocols/s7/s7.mspec

This defines a type COTPPacketDisconnectRequest; however, the third parameter is not class but disconnectReason.

The old TPDU is defined here:
https://github.com/apache/plc4x/blob/rel/0.6/plc4j/protocols/iso-tp/src/main/java/org/apache/plc4x/java/isotp/protocol/model/tpdus/DisconnectRequestTpdu.java

This should be an enum type. The constant names and values can be taken from here:
https://github.com/apache/plc4x/blob/rel/0.6/plc4j/protocols/iso-tp/src/main/java/org/apache/plc4x/java/isotp/protocol/model/types/DisconnectReason.java

As soon as these changes are in place and the code has been regenerated, the closing logic can be implemented, taking inspiration from the old driver's code:
https://github.com/apache/plc4x/blob/rel/0.6/plc4j/drivers/s7/src/main/java/org/apache/plc4x/java/s7/connection/S7PlcConnection.java
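The mspec change above asks for an enum with numeric wire values and a lookup-by-value accessor. A minimal sketch of that shape follows; the constant names and values here are illustrative placeholders (two ISO 8073 disconnect reasons I am reasonably sure of), and the authoritative list should be copied from the 0.6 DisconnectReason class linked above.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the enum shape the generated code would need: named constants with
// numeric wire values plus a reverse lookup. Names/values are illustrative only;
// copy the real ones from the 0.6 DisconnectReason class.
public enum DisconnectReason {
    REASON_NOT_SPECIFIED((short) 0x00),
    NORMAL((short) 0x80);

    private static final Map<Short, DisconnectReason> BY_VALUE = new HashMap<>();
    static {
        for (DisconnectReason reason : values()) {
            BY_VALUE.put(reason.value, reason);
        }
    }

    private final short value;

    DisconnectReason(short value) { this.value = value; }

    public short getValue() { return value; }

    // Resolve the enum constant for a value read off the wire.
    public static DisconnectReason enumForValue(short value) {
        return BY_VALUE.get(value);
    }

    public static void main(String[] args) {
        System.out.println(enumForValue((short) 0x80)); // NORMAL
    }
}
```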

Imported from Jira PLC4X-271. Original Jira may contain additional context.
Reported by: cdutz.

plc4x s7 driver java write error for REAL type

Creating a write request for a REAL type, e.g. via

```java
addItem("value-5", "%DB1.DBD16:REAL", 12.1)
```

fails when the request is executed, with the following error:

```
ERROR org.apache.plc4x.java.s7.readwrite.types.DataTransportErrorCode - No DataTransportErrorCode for value 7
```

Imported from Jira PLC4X-308. Original Jira may contain additional context.
Reported by: lhs.

[Feature-Request][S7] Reading long Int Array (Optimizer)

Dear Chris,

I need to report another bug.
As you know, I am trying to read a very complex data block from an S7-1200; in the future I will try with a 1500. A picture of one such block is attached.
I found a problem, similar to the string problem I reported in PLC4X-240, when reading long arrays of INT.
The software running in the PLC has a component that samples an analog value and stores the data in long arrays of INT or REAL. An example is:


PLC_CellValue_2[400]='%DB2:928.0:INT[400]'

or


PLC_SpeedNotSafety[400]='%DB2:6542.0:REAL[400]'

Reading such a long array of values is not possible because the request on the wire asks for too large a payload (probably the same problem we had with strings).
I have attached the wireshark capture for you.

I compared with version 0.6.0 of the library, where the array is read correctly: on the wire I see three reads of 110 integers and a final one of 70, which together make up my 400 integers.
I captured this read in Wireshark as well.

Just to complete the big picture: my final target is to read a big DB. In this complex scenario there is a similar problem, which I think is a request-size problem too.
If I try to read a list of variables like this one:


PLC_ReportColpoDateLast='%DB2:0.0:DATE_AND_TIME'
PLC_@timestamp='%DB2:12.0:DATE_AND_TIME'
PLC_ReportColpoDateLastID='%DB2:24.0:DINT'
PLC_ReportColpoDateID='%DB2:28.0:DINT'
PLC_RichiestaCurva='%DB2:32.0:INT'
PLC_TrasferimentoCurva='%DB2:34.0:BOOL'
PLC_ArchiveReport='%DB2:36.0:BOOL'
PLC_DeleteReport='%DB2:36.1:BOOL'
PLC_Report='%DB2:36.2:BOOL'
PLC_Enable='%DB2:36.3:BOOL'
PLC_SetCelloffset='%DB2:36.4:BOOL'
PLC_ResCelloffset='%DB2:36.5:BOOL'
PLC_IDX='%DB2:38.0:INT'
PLC_KW='%DB2:40.0:BOOL'
PLC_TemCen='%DB2:42.0:REAL'
PLC_TemBiella='%DB2:46.0:REAL'
PLC_TemBroDx='%DB2:50.0:REAL'
PLC_TemBroSx='%DB2:54.0:REAL'
PLC_PreCen='%DB2:58.0:REAL'
PLC_PreFreno='%DB2:62.0:REAL'
PLC_PreCil='%DB2:66.0:REAL'
PLC_EncPosAct='%DB2:70.0:REAL'
PLC_EncPosActSafety='%DB2:74.0:REAL'
PLC_EncSpeedSafety='%DB2:78.0:REAL'
PLC_EncSpeed='%DB2:82.0:REAL'
PLC_CellValue[5]='%DB2:86.0:INT[5]'
PLC_CellValueOffset[5]='%DB2:96.0:INT[5]'
PLC_N_TotPezzi='%DB2:106.0:DINT'
PLC_N_TotColpi='%DB2:110.0:DINT'
PLC_KW_Max='%DB2:114.0:INT'
PLC_CellValueMax[5]='%DB2:116.0:INT[5]'
PLC_CellValue_1[2]='%DB2:126.0:INT[2]'
PLC_CellValue_2[400]='%DB2:928.0:INT[100]'

The error I see on the wire is the same. Please note that this list contains no arrays of 400 samples, but I suppose the sum of all the requests triggers the same bug about requests longer than 240 bytes.
I have attached a wireshark capture of this scenario too.

I hope this analysis helps to find a universal solution for this problem.

Regards,
Stefano

P.S. As usual I am using the HelloPlc4x code and the latest compiled version of the 0.8.0 library.
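Until the optimizer handles this, a workaround could be to split long array reads into PDU-sized chunks on the application side. The sketch below only computes the chunking; the 220-byte usable payload is an assumption derived from the 240-byte PDU mentioned above (110 INTs = 220 bytes matches the 3×110 + 70 pattern observed with 0.6.0).

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: split a long array request into chunks that fit the negotiated PDU.
// MAX_PAYLOAD_BYTES is an assumption (~220 usable bytes out of a 240-byte PDU);
// the real limit must come from the negotiated PDU size of the connection.
public class ArrayChunker {

    static final int MAX_PAYLOAD_BYTES = 220;

    // One chunk: element offset into the array plus the number of elements to read.
    static final class Chunk {
        final int offset;
        final int count;
        Chunk(int offset, int count) { this.offset = offset; this.count = count; }
    }

    // elementCount elements of elementSize bytes each (INT = 2, REAL = 4).
    static List<Chunk> chunk(int elementCount, int elementSize) {
        int perChunk = MAX_PAYLOAD_BYTES / elementSize;
        List<Chunk> chunks = new ArrayList<>();
        for (int offset = 0; offset < elementCount; offset += perChunk) {
            chunks.add(new Chunk(offset, Math.min(perChunk, elementCount - offset)));
        }
        return chunks;
    }

    public static void main(String[] args) {
        // 400 INT elements -> 110 + 110 + 110 + 70, matching the 0.6.0 behaviour.
        for (Chunk c : chunk(400, 2)) {
            System.out.println("offset=" + c.offset + " count=" + c.count);
        }
    }
}
```

Each chunk would then be issued as its own read item (e.g. `%DB2:928.0:INT[110]` starting at the chunk's byte offset) and the results concatenated.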

Imported from Jira PLC4X-241. Original Jira may contain additional context.
Reported by: fox_pluto.

Implement the Profinet Protocol

In contrast to the already implemented S7 communication, Profinet is a different protocol with a different feature set. In order to implement a Profinet driver, it is required to become a member of the Profinet consortium. This membership is tied to an annual membership fee. The ASF could become a member; the problem, however, is that the ASF doesn't join organizations that cost the ASF money. I have discussed this issue with the CEO of Profinet Europe, and there might be an option for the ASF to become a member with someone else paying the bill. Only members seem to be allowed to advertise with the Profinet logo and call their products Profinet-compatible.

Imported from Jira PLC4X-8. Original Jira may contain additional context.
Reported by: cdutz.

Modbus TCP Timeout not working. NIFI Processor Task gets stuck

When using V0.9.0 of the PLC4X NiFi processor for Modbus TCP, the processor task gets stuck if the network connection is broken. Neither the default timeout nor the request-timeout option seems to have any effect on this behaviour.

If the NiFi processor is started when the network connection is already broken, a timeout error is thrown. If the connection breaks after a successful initial connection, the running task gets stuck on the next Modbus TCP request and has to be forcibly terminated.

V0.8.0 works fine in the same scenario with the same NiFi config. In production, V0.9.0 with Modbus TCP was not usable, since connections broke every day, so we had to revert to V0.8.0. This was tested with the PLC4X build from https://search.maven.org/search?q=plc4j-nifi-plc4x-nar and the latest official NiFi (V1.14.0), both the Docker and Windows versions.

Imported from Jira PLC4X-321. Original Jira may contain additional context.
Reported by: zedman.

Reading LREAL from S7 causes PlcRuntimeException

Reading an LREAL in my demo project https://github.com/sewiendl/plc4j-demo/tree/feature/read-lreal causes the following exception:

```
org.apache.plc4x.java.api.exceptions.PlcRuntimeException: Field 'LReal' could not be fetched, response was INVALID_DATATYPE
        at org.apache.plc4x.java.base.messages.DefaultPlcReadResponse.getFieldInternal(DefaultPlcReadResponse.java:577)
        at org.apache.plc4x.java.base.messages.DefaultPlcReadResponse.getObject(DefaultPlcReadResponse.java:81)
        at com.example.plc4jdemo.Main.main(Main.java:39)
```

The field was added with:

```java
builder.addItem("LReal", "%DB101.DBX110:LREAL");
```

Imported from Jira PLC4X-154. Original Jira may contain additional context.
Reported by: swiendl.

Scraper Statistics show Incomplete Requests as Failed

Looking at the statistics debug log, the statistics occasionally report failed requests, but on the next cycle everything may show as completed.

4 (3 success, 25.0 % failed, 0.0 % too slow)

Then on the next reporting cycle may show.

7 (7 success, 0.0 % failed, 0.0 % too slow)

I assume it is counting requests that haven't completed yet as failed, when it shouldn't count incomplete ones at all.

 

plc4j/tools/scraper/src/main/java/org/apache/plc4x/java/scraper/ScraperImpl.java - Line 146.
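A minimal sketch of the suspected accounting problem (hypothetical counters, not the actual ScraperImpl code): if the failure rate is derived as issued minus succeeded, requests that are merely still in flight get reported as failed.

```java
// Hypothetical counters illustrating the suspected accounting bug;
// this is not the actual ScraperImpl code.
public class ScraperStatsSketch {

    // Naive rate: everything that has not succeeded yet counts as failed,
    // including requests that are simply still in flight.
    static double naiveFailedRate(int issued, int succeeded) {
        return (issued - succeeded) / (double) issued;
    }

    // Rate that only counts requests which actually completed with a failure.
    static double failedRate(int issued, int failed) {
        return failed / (double) issued;
    }

    public static void main(String[] args) {
        int issued = 4, succeeded = 3, failed = 0; // one request still pending
        System.out.printf("naive: %.1f %% failed%n", 100 * naiveFailedRate(issued, succeeded));
        System.out.printf("fixed: %.1f %% failed%n", 100 * failedRate(issued, failed));
    }
}
```

With 4 issued, 3 succeeded and 1 still pending, the naive formula yields the 25.0 % figure seen in the log, while counting only genuinely failed completions yields 0.0 %.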

Imported from Jira PLC4X-260. Original Jira may contain additional context.
Reported by: hutcheb.

Apache NiFi integration should allow Expression Language

Apache NiFi integration should allow us to use Expression Language for PLC connection string and PLC resource address string.

We sometimes need to get data from 100 different addresses. The current processors don't allow us to create those strings on the fly, so we need to enter them manually or use the NiFi API to create flows automatically.

If those parameters supported the expression language, we could read a list from a CSV file or a database and process it in a loop in Apache NiFi.

So it would be a very handy feature to be able to specify the connection string and the address string dynamically.

As a starting point, the PutFile processor can be examined; it supports the expression language for its "Directory" parameter.

https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/PutFile.java

Notable lines:

```java
import org.apache.nifi.expression.ExpressionLanguageScope;
```

Define the parameter as supporting the expression language:

```java
public static final PropertyDescriptor DIRECTORY = new PropertyDescriptor.Builder()
    .name("Directory")
    .description("The directory to which files should be written. You may use expression language such as /aa/bb/${path}")
    .required(true)
    .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
    .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
    .build();
```

The expression can then be evaluated in the onTrigger event like this:

```java
context.getProperty(DIRECTORY).evaluateAttributeExpressions(flowFile).getValue()
```

Imported from Jira PLC4X-196. Original Jira may contain additional context.
Reported by: turker.tunali.

S7 Cannot write string

Exception thrown when writing a string type: java.nio.BufferOverflowException

Imported from Jira PLC4X-318. Original Jira may contain additional context.
Reported by: fungoddd.

[S7] Writing byte array not working

I'm having some issues writing a byte array to an S7-1200 using the S7 driver.

I'm a bit new to communicating with PLCs, so I'm not sure whether I'm doing something wrong.

I've followed the example of how to write a byte array to the connection; however, I'm getting an INTERNAL_ERROR response code from the library, and in Wireshark the status from the PLC is "inconsistent data type".

I've attached the sample project that's not working for me, my definition of the byte array on the PLC, and a Wireshark capture.

It's possible that I'm not doing something right, but I can't figure it out.

Writing most single values works; only the array is a problem. And as far as I understood from the code and documentation, arrays should be supported.

Any help would be appreciated.

Imported from Jira PLC4X-309. Original Jira may contain additional context.
Reported by: maidab.

Class casts in new opcua driver

Issue reported on mailing lists:


```
[nioEventLoopGroup-2-1] WARN io.netty.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: java.lang.ClassCastException: class org.apache.plc4x.java.opcua.readwrite.ServiceFault cannot be cast to class org.apache.plc4x.java.opcua.readwrite.ReadResponse (org.apache.plc4x.java.opcua.readwrite.ServiceFault and org.apache.plc4x.java.opcua.readwrite.ReadResponse are in unnamed module of loader 'app')
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:98)
        at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
        at io.netty.handler.codec.ByteToMessageCodec.channelRead(ByteToMessageCodec.java:103)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.ClassCastException: class org.apache.plc4x.java.opcua.readwrite.ServiceFault cannot be cast to class org.apache.plc4x.java.opcua.readwrite.ReadResponse (org.apache.plc4x.java.opcua.readwrite.ServiceFault and org.apache.plc4x.java.opcua.readwrite.ReadResponse are in unnamed module of loader 'app')
        at org.apache.plc4x.java.opcua.protocol.OpcuaProtocolLogic.lambda$read$0(OpcuaProtocolLogic.java:177)
        at org.apache.plc4x.java.opcua.context.SecureChannel.lambda$4(SecureChannel.java:212)
        at org.apache.plc4x.java.spi.Plc4xNettyWrapper.decode(Plc4xNettyWrapper.java:175)
        at io.netty.handler.codec.MessageToMessageCodec$2.decode(MessageToMessageCodec.java:81)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
        ... 23 more
```

Imported from Jira PLC4X-319. Original Jira may contain additional context.
Reported by: ldywicki.

Plc4x support for RTU/Serial communication for modbus protocol.

Hi,

I am using PLC4X in one of our projects to develop a module that communicates with a sensor device via Modbus RTU/serial.

I am able to communicate with the device over the TCP transport, but not via RTU/serial.

I tried to find examples or documentation on how to use RTU/serial communication in PLC4X, but could not find any information on the website or elsewhere on the web.

Also, looking at the documentation at https://plc4x.apache.org/users/protocols/modbus.html, it is not clear whether Modbus supports a serial transport, as it only lists tcp and udp:

Compatible Transports:
  • tcp (Default Port: 502)
  • udp (Default Port: 502)

Could you please clarify whether serial is supported for Modbus? If so, could you point me to an example or documentation explaining how to use it for serial communication?

Anyway, I tried to use the plc4j-transport-serial driver (https://plc4x.apache.org/users/transports/serial.html) to communicate with the device. The connection is established, but reading the data fails with the following warning:

```
2021-06-03-18:13:51.814 [nioEventLoopGroup-2-1] WARN  io.netty.channel.nio.NioEventLoop - Selector.select() returned prematurely 512 times in a row; rebuilding Selector org.apache.plc4x.java.transport.serial.SerialPollingSelector@28ecdc0d.
2021-06-03-18:13:59.630 [nioEventLoopGroup-2-1] WARN  io.netty.channel.nio.NioEventLoop - Selector.select() returned prematurely 512 times in a row; rebuilding Selector org.apache.plc4x.java.transport.serial.SerialPollingSelector@11c9a1fa.
```

My sample code is as follows:

```java
private void plcRtuReader() {
    // unit-identifier=1&
    String connectionString =
        "modbus:serial://COM5?unit-identifier=1&baudRate=19200&stopBits=" + SerialPort.ONE_STOP_BIT
            + "&parityBits=" + SerialPort.NO_PARITY
            + "&dataBits=8";
    System.out.println("URL: " + connectionString);
    try (PlcConnection plcConnection = new PlcDriverManager().getConnection(connectionString)) {

        if (!plcConnection.getMetadata().canRead()) {
            System.out.println("This connection doesn't support reading.");
            return;
        }

        PlcReadRequest.Builder builder = plcConnection.readRequestBuilder();
        builder.addItem("value-2", "input-register:1[2]");
        PlcReadRequest readRequest = builder.build();

        // CompletableFuture<? extends PlcReadResponse> asyncResponse = readRequest.execute();
        PlcReadResponse response = readRequest.execute().get();
        for (String fieldName : response.getFieldNames()) {
            if (response.getResponseCode(fieldName) == PlcResponseCode.OK) {
                int numValues = response.getNumberOfValues(fieldName);
                // If it's just one element, output just one single line.
                if (numValues == 1) {
                    System.out.println("Value[" + fieldName + "]: " + response.getObject(fieldName));
                }
                // If it's more than one element, output each in a single row.
                else {
                    System.out.println("Value[" + fieldName + "]:");
                    for (int i = 0; i < numValues; i++) {
                        System.out.println(" - " + response.getObject(fieldName, i));
                    }
                }
            }
            // Something went wrong, so output an error message instead.
            else {
                System.out.println("Error[" + fieldName + "]: " + response.getResponseCode(fieldName).name());
            }
        }

        System.exit(0);
    } catch (PlcConnectionException e) {
        e.printStackTrace();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
```

Thanks a lot for your help.

Regards,
Purushotham

Imported from Jira PLC4X-300. Original Jira may contain additional context.
Reported by: psham81.

Exception on S7 disconnect

My PLC4J demo project at https://github.com/sewiendl/plc4j-demo yields the following output after/during disconnecting from the PLC:

```
all requests took PT2M6.261S

Nov 14, 2019 10:05:13 AM io.netty.channel.DefaultChannelPipeline onUnhandledInboundException
WARNUNG: An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.io.IOException: Eine vorhandene Verbindung wurde vom Remotehost geschlossen
        at sun.nio.ch.SocketDispatcher.read0(Native Method)
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
        at sun.nio.ch.IOUtil.read(IOUtil.java:192)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
        at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:288)
        at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1125)
        at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:347)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.lang.Thread.run(Thread.java:748)

disconnected
```

(The German message means "An existing connection was closed by the remote host.")

This bug report relates to my questions on the mailing list. I will also try to hand in a Wireshark dump later.

Imported from Jira [PLC4X-153](https://issues.apache.org/jira/browse/PLC4X-153). Original Jira may contain additional context.
Reported by: swiendl.

ADS connection issue, Help wanted

I ran the TwinCAT simulator on my local host machine (IP in the 192.168.x.x subnetwork); the simulator has the IP address 172.21.97.81. I then used the ADS connection string ads:tcp://localhost/172.21.97.81.1.1:851, which does not seem to connect, and I receive the error message shown in the logs. Can someone point out what the problem or the bug is?

 

Best Regards

Vikram Gopu 

```
[main] INFO org.apache.plc4x.java.PlcDriverManager - Instantiating new PLC Driver Manager with class loader jdk.internal.loader.ClassLoaders$AppClassLoader@2626b418
[main] INFO org.apache.plc4x.java.PlcDriverManager - Registering available drivers...
[main] INFO org.apache.plc4x.java.PlcDriverManager - Registering driver for Protocol modbus (Modbus (TCP / Serial))
[main] INFO org.apache.plc4x.java.PlcDriverManager - Registering driver for Protocol s7 (Siemens S7 (Basic))
[main] INFO org.apache.plc4x.java.PlcDriverManager - Registering driver for Protocol ads (Beckhoff Twincat ADS)
[main] INFO org.apache.plc4x.java.scraper.config.triggeredscraper.ScraperConfigurationTriggeredImpl - Assuming job as triggered job because triggerConfig has been set
[main] INFO org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperImpl - Starting jobs...
[main] INFO org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperImpl - Task TriggeredScraperTask{driverManager=org.apache.plc4x.java.utils.connectionpool.PooledPlcDriverManager@4b9e255, jobName='ScheduleJob', connectionAlias='DeviceSource', connectionString='ads:tcp://localhost/172.21.97.81.1.1:851', requestTimeoutMs=1000, executorService=java.util.concurrent.ThreadPoolExecutor@5e57643e[Running, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0], resultHandler=eu.cloudplug.cpe.plc4x.PLC4XScrapper$$Lambda$67/0x0000000800bcac40@133e16fd, triggerHandler=org.apache.plc4x.java.scraper.triggeredscraper.triggerhandler.TriggerHandlerImpl@51b279c9} added to scheduling
[triggeredscraper-scheduling-thread-1] WARN org.apache.plc4x.java.scraper.triggeredscraper.TriggeredScraperTask - Exception during scraping of Job ScheduleJob, Connection-Alias DeviceSource: Error-message: null - for stack-trace change logging to DEBUG
[nioEventLoopGroup-3-1] WARN io.netty.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: java.lang.IndexOutOfBoundsException
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:98)
        at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
        at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1421)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:697)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:632)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:549)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:511)
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:918)
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.base/java.lang.Thread.run(Thread.java:830)
Caused by: java.lang.IndexOutOfBoundsException
        at io.netty.buffer.EmptyByteBuf.readUnsignedIntLE(EmptyByteBuf.java:594)
        at org.apache.plc4x.java.ads.api.util.UnsignedIntLEByteValue.<init>(UnsignedIntLEByteValue.java:53)
        at org.apache.plc4x.java.ads.api.commands.types.Result.<init>(Result.java:43)
        at org.apache.plc4x.java.ads.api.commands.types.Result.of(Result.java:59)
        at org.apache.plc4x.java.ads.protocol.Ads2PayloadProtocol.handleADSReadWriteCommand(Ads2PayloadProtocol.java:367)
        at org.apache.plc4x.java.ads.protocol.Ads2PayloadProtocol.decode(Ads2PayloadProtocol.java:135)
        at org.apache.plc4x.java.ads.protocol.Ads2PayloadProtocol.decode(Ads2PayloadProtocol.java:42)
        at io.netty.handler.codec.MessageToMessageCodec$2.decode(MessageToMessageCodec.java:81)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
        ... 22 more
```

Imported from Jira PLC4X-217. Original Jira may contain additional context.
Reported by: vikram919.

The connection pool should respect the entire URL

Currently the connection pool seems to key only on the base connection string, ignoring the parameters. As the parameters may contain important information (rack & slot for S7, the location of the KNXproj file for KNX, or EDE file locations for BACnet), it should treat the entire connection string as the pool key.
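A minimal sketch of the proposed keying (hypothetical pool, not the actual PLC4X connection-pool code): using the full URL, parameters included, as the map key keeps connections with different options apart.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical pool sketch, not the actual PLC4X connection-pool code: the full
// connection string (including query parameters) is the pool key, so two URLs
// that differ only in their parameters get separate pool entries.
public class UrlKeyedPool<C> {

    private final Map<String, C> pool = new ConcurrentHashMap<>();

    public interface Factory<C> { C create(String connectionString); }

    public C get(String connectionString, Factory<C> factory) {
        // The ENTIRE string is the key; nothing is stripped off.
        return pool.computeIfAbsent(connectionString, factory::create);
    }

    public int size() { return pool.size(); }

    public static void main(String[] args) {
        UrlKeyedPool<String> pool = new UrlKeyedPool<>();
        pool.get("s7://10.0.0.1?remote-rack=0&remote-slot=1", cs -> "connection for " + cs);
        pool.get("s7://10.0.0.1?remote-rack=0&remote-slot=2", cs -> "connection for " + cs);
        System.out.println("pool entries: " + pool.size()); // 2: parameters kept them distinct
    }
}
```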

Imported from Jira PLC4X-170. Original Jira may contain additional context.
Reported by: cdutz.

(plc4j opcua driver) Unable to read when node id contains unicode characters

Reading fails when the node id contains Unicode characters.
For example, with the node id ns=2;s=设备1, the read result is NOT_FOUND.

Looking at the generated class org.apache.plc4x.java.opcua.readwrite.PascalString, getStringLength equals the string's character length; shouldn't it instead equal getBytes().length?
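The discrepancy is easy to demonstrate with the standard library (the UTF-8 encoding is an assumption; OPC UA strings are UTF-8 on the wire):

```java
import java.nio.charset.StandardCharsets;

// For non-ASCII node ids the character count and the UTF-8 byte count differ,
// so a length field filled with String.length() under-reports the encoded size.
public class PascalStringLength {
    public static void main(String[] args) {
        String nodeId = "设备1";
        int charLength = nodeId.length();                                // 3
        int byteLength = nodeId.getBytes(StandardCharsets.UTF_8).length; // 7 (3 + 3 + 1)
        System.out.println("chars: " + charLength + ", utf-8 bytes: " + byteLength);
    }
}
```

For the plain-ASCII node ids used in most tests the two values coincide, which would explain why the bug only surfaces with Unicode ids.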

[S7] When trying to write to a S7 device and writing is not explicitly enabled, the PLC will respond with an error code

The S7 will respond with:

  • Error Class: 0x83
  • Error Code: 0x04 

if the user tries a write request and write access is not explicitly enabled. We could definitely handle this in a nicer way.

To fix the problem, you need to select the PLC in TIA Portal, go into the Properties dialog, and select "Protection". You will probably notice there's an access-level table, but you need to scroll down (even if it looks as if there is nothing there). There, check the box "Permit access with PUT/GET communications from remote partner".

It would be cool if we could give our users a hint about this.

Imported from Jira PLC4X-208. Original Jira may contain additional context.
Reported by: cdutz.

java.lang.ClassCastException: DefaultPlcSubscriptionField cannot be cast to class OpcuaField

Hi team -

I followed https://github.com/apache/plc4x/tree/rel/0.8/plc4j/examples/hello-world-plc4x-subscription and am facing the error below.

version : plc4j-driver-opcua-0.8.0

java : 1.8

Exception in thread "main" java.util.concurrent.ExecutionException: java.lang.ClassCastException: org.apache.plc4x.java.spi.model.DefaultPlcSubscriptionField cannot be cast to org.apache.plc4x.java.opcua.protocol.OpcuaField

 

Please kindly provide your suggestions.

Imported from Jira PLC4X-313. Original Jira may contain additional context.
Reported by: karacc.

Subscription to system events.

  • This version allows subscription to the following types of events:
    MODE: reports the status of the PLC.
    SYS: reports system events.
    USR: reports user events.
    ALM_S: reports ALARM_SQ, ALARM_S, ALARM_SC, ALARM_DQ, ALARM_D.
    ALM_8: reports NOTIFY_8P, ALARM, ALARM_8P, NOTIFY.

  • The ALM_S events are generated by the S7-300. The S7-400 PLCs support ALM_S and ALM_8.

  • S7-1200 PLCs do not have a message system.

  • Messages are not yet supported for the S7-1500; planned for the next revision.

Imported from Jira PLC4X-183. Original Jira may contain additional context.
Reported by: cgarcia.

Substitution of returning null-Values by throwing e.g. UnsupportedOperationException

The abstract FieldItem class holds non-abstract methods that return null values; this might cause strange behaviour if a subclass has not overridden those methods.

It would be more intuitive to throw an exception instead of returning null, to make clear that the operation fails for a specific reason.
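A sketch of the proposed behaviour (simplified names, not the actual FieldItem API): the base implementation throws an UnsupportedOperationException naming the reason, and only subclasses that support the conversion override it.

```java
// Sketch of the proposed change (simplified; not the actual FieldItem API):
// instead of silently returning null, the base class throws, so a subclass that
// forgot to override a getter fails loudly with the reason.
public abstract class FieldItemSketch {

    // Before: returning null hides the fact that the subclass lacks support.
    public Long getLongBefore() {
        return null;
    }

    // After: an explicit exception names the unsupported operation.
    public Long getLong() {
        throw new UnsupportedOperationException(
            getClass().getSimpleName() + " cannot represent its value as Long");
    }

    // A subclass only overrides what it actually supports.
    public static class LongFieldItem extends FieldItemSketch {
        private final long value;
        public LongFieldItem(long value) { this.value = value; }
        @Override public Long getLong() { return value; }
    }

    public static class StringFieldItem extends FieldItemSketch {
        // getLong() deliberately not overridden -> throws instead of returning null.
    }

    public static void main(String[] args) {
        System.out.println(new LongFieldItem(7).getLong());
        try {
            new StringFieldItem().getLong();
        } catch (UnsupportedOperationException e) {
            System.out.println("StringFieldItem: " + e.getMessage());
        }
    }
}
```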

Imported from Jira PLC4X-63. Original Jira may contain additional context.
Reported by: timbo2k.
