jeff-zou / flink-connector-redis

An asynchronous Flink connector based on Lettuce, supporting SQL join and sink, query caching, and debugging.

License: Apache License 2.0

Java 100.00%
flink-connector flink-connector-redis lettuce redis flink flink-sql join

flink-connector-redis's Issues

Your project pulls in 35 open-source components with 4 known vulnerabilities; please upgrade

Scanning found that jeff-zou/flink-connector-redis pulls in 35 open-source components in total, containing 4 vulnerabilities.

Vulnerability title: Apache Commons Compress security vulnerability
Affected component: org.apache.commons:[email protected]
Vulnerability ID: CVE-2021-35517
Description: Apache Commons Compress is an Apache Foundation library for working with compressed files.
It contains a resource-management flaw: when reading a specially crafted TAR archive, Compress can allocate large amounts of memory, so a small input can lead to an out-of-memory error.
Affected range: [1.1, 1.21)
Minimum fixed version: 1.21
Dependency path: org.apache.xsj:[email protected]>org.apache.flink:[email protected]>org.apache.flink:[email protected]>org.apache.commons:[email protected]

For the remaining vulnerabilities, see the detailed report: https://mofeisec.com/jr?p=aa6651

Could you add support for writing raw BYTES into Redis?

Please add a way to store raw BYTES in Redis without converting them to Base64.
Here is my code.

    from pyflink.table import EnvironmentSettings, TableEnvironment, DataTypes
    from pyflink.table.udf import udtf, TableFunction
    from pyflink.common import Row
    import pickle
    env_settings = EnvironmentSettings.in_streaming_mode()
    table_env = TableEnvironment.create(env_settings)
    statement_set = table_env.create_statement_set()
    configuration = table_env.get_config().get_configuration()
    configuration.set_string("pipeline.name", "测试将数据从数据库中同步到redis上-过程带特殊处理方法")
    configuration.set_string("execution.checkpointing.interval", "3s")

    class RedisForm(TableFunction):
        def __init__(self, key_name):
            self.key_name = key_name
        def eval(self, record: Row):
            redis_hash_value = pickle.dumps(record.as_dict())
            redis_hash_key = record[self.key_name]
            per_name = "manager-system:"
            redis_name = per_name + "qhdata_standard" + "." + "dim_addr_type" + ":" + "cur_code1"
            # yield one three-field row; returning a tuple would be treated
            # as an iterable of single-field rows by the UDTF runtime
            yield redis_name, redis_hash_key, redis_hash_value

    table_env.execute_sql('''
            CREATE TABLE source_table (
            `rid` INT,
            update_time TIMESTAMP,
            create_time TIMESTAMP,
            cur_code STRING,
            cur_name STRING,
          PRIMARY KEY (`rid`)  NOT ENFORCED
        ) WITH (
            'connector' = 'mysql-cdc',
            'hostname'='192.168.1.1',
            'port' = '3308',
            'username'='root',
            'password'='123456',
            'database-name' = 'qhdata_standard',
            'table-name' = 'dim_addr_type',
            'server-time-zone' = 'Asia/Shanghai'
        );
    ''')

    table_env.execute_sql('''
            CREATE TABLE result_table (
            redis_name VARCHAR, 
            redis_key VARCHAR, 
            redis_value BYTES 
        ) WITH (
            'connector' = 'redis',
            'host'='192.168.1.1',
            'port' = '6379',
            'cluster-nodes' = '192.168.1.1:6379',
            'redis-mode' = 'cluster',
            'command'='hset'
        );
    ''')
    createForm = udtf(RedisForm("cur_code"), result_types=[DataTypes.STRING(), DataTypes.STRING(), DataTypes.BYTES()])
    source = table_env.from_path("source_table")
    result = source.flat_map(createForm)
    statement_set.add_insert("result_table", result)
    statement_set.execute()

The data computed in Flink looks like x'80049596000000000000007d9...
but the connector converts it to Base64. For raw BYTES data like this, could you offer an option to write it as-is, without conversion?
Thanks a lot.

Version 1.3.1: data can be written but not read back; how do I fix this?

Using the bundled example, the data is written:

-- create the table
create table sink_redis(uid VARCHAR,score double,score2 double )
with ( 'connector' = 'redis',
'host' = '10.11.69.176',
'port' = '6379',
'redis-mode' = 'single',
'password' = '****',
'command' = 'SET',
'value.data.structure' = 'row'); -- 'value.data.structure'='row': the whole row is stored in the value, separated by '\01'
-- write test data; score and score2 are the two dimension fields to be looked up later
insert into sink_redis select * from (values ('1', 10.3, 10.1));

-- in redis, the value is: "1\x0110.3\x0110.1" --
-- write finished --

-- create join table --
create table join_table with ('command'='get', 'value.data.structure'='row') like sink_redis

-- create result table --
create table result_table(uid VARCHAR, username VARCHAR, score double, score2 double) with ('connector'='print')

-- create source table --
create table source_table(uid VARCHAR, username VARCHAR, proc_time as procTime()) with ('connector'='datagen', 'fields.uid.kind'='sequence', 'fields.uid.start'='1', 'fields.uid.end'='2')

-- look up the dimension table and fetch multiple fields from it --
insert into result_table
select
    s.uid,
    s.username,
    j.score,  -- from the dimension table
    j.score2  -- from the dimension table
from source_table as s
join join_table for system_time as of s.proc_time as j
    on j.uid = s.uid

How do I query a hash structure?

I can't figure out how to query a hash structure. My data has two fields.
The dimension table is defined like this:
[screenshot: dimension table DDL]
Using it throws an error:
[screenshot: error message]
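
For what it's worth, the hget examples elsewhere on this page define the lookup table with three columns (key, field, value) and read it through a processing-time lookup join. A minimal, hedged sketch along those lines (table and column names are placeholders, not taken from the issue's screenshots):

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class HashLookupSketch {
        public static void main(String[] args) {
            TableEnvironment tEnv =
                    TableEnvironment.create(EnvironmentSettings.inStreamingMode());
            // hget lookup table: (hash key, hash field, value)
            tEnv.executeSql(
                    "create table redis_dim (hkey VARCHAR, hfield VARCHAR, v VARCHAR) with ("
                            + "'connector'='redis','host'='127.0.0.1','port'='6379',"
                            + "'redis-mode'='single','command'='hget')");
            // a driving stream with a processing-time attribute for the lookup join
            tEnv.executeSql(
                    "create table source_table (uid VARCHAR, proc_time AS PROCTIME()) with ("
                            + "'connector'='datagen','fields.uid.kind'='sequence',"
                            + "'fields.uid.start'='1','fields.uid.end'='2')");
            tEnv.executeSql(
                    "create table result_table (uid VARCHAR, v VARCHAR) with ('connector'='print')");
            // look up one hash field by joining on the key and fixing the field column
            tEnv.executeSql(
                    "insert into result_table select s.uid, d.v "
                            + "from source_table as s "
                            + "join redis_dim for system_time as of s.proc_time as d "
                            + "on d.hkey = s.uid and d.hfield = 'field1'");
        }
    }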

Connection fails in sentinel mode

1. The Flink SQL is as follows:
[screenshot: Flink SQL]
Dependency jar used:
[screenshot: dependency jar]
The jar was placed under flink/lib and the job was submitted in standalone mode. The following error occurred:
2023-04-07 15:44:52
org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:82)
at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:301)
at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:291)
at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:282)
at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:739)
at org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:78)
at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:443)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:304)
at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:302)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:217)
at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
at scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
at akka.actor.Actor.aroundReceive(Actor.scala:537)
at akka.actor.Actor.aroundReceive$(Actor.scala:535)
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580)
at akka.actor.ActorCell.invoke(ActorCell.scala:548)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
at akka.dispatch.Mailbox.run(Mailbox.scala:231)
at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
Caused by: io.lettuce.core.RedisConnectionException: Cannot connect to a Redis Sentinel: [redis://@10.16.216.111:26379, redis://@10.16.216.112:26379, redis://@10.16.216.113:26379]
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:72)
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:56)
at io.lettuce.core.AbstractRedisClient.getConnection(AbstractRedisClient.java:350)
at io.lettuce.core.RedisClient.connect(RedisClient.java:216)
at io.lettuce.core.RedisClient.connect(RedisClient.java:201)
at org.apache.flink.streaming.connectors.redis.common.container.RedisContainer.open(RedisContainer.java:57)
at org.apache.flink.streaming.connectors.redis.table.RedisSinkFunction.open(RedisSinkFunction.java:282)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:100)
at org.apache.flink.table.runtime.operators.sink.SinkOperator.open(SinkOperator.java:58)
at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:107)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:703)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.call(StreamTaskActionExecutor.java:100)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:679)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:646)
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:917)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.lettuce.core.RedisConnectionException: Cannot connect Redis Sentinel at redis://@10.16.216.113:26379
at io.lettuce.core.RedisClient.lambda$connectSentinelAsync$6(RedisClient.java:527)
at reactor.core.publisher.Mono.lambda$onErrorMap$31(Mono.java:3776)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:94)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:106)
at reactor.core.publisher.Operators.error(Operators.java:198)
at reactor.core.publisher.MonoError.subscribe(MonoError.java:53)
at reactor.core.publisher.Mono.subscribe(Mono.java:4455)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:103)
at reactor.core.publisher.MonoCompletionStage.lambda$subscribe$0(MonoCompletionStage.java:86)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at io.lettuce.core.AbstractRedisClient.lambda$null$5(AbstractRedisClient.java:470)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at io.lettuce.core.protocol.RedisHandshakeHandler.lambda$fail$4(RedisHandshakeHandler.java:139)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:552)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)
at io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:184)
at io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:95)
at io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:30)
at io.lettuce.core.protocol.RedisHandshakeHandler.fail(RedisHandshakeHandler.java:138)
at io.lettuce.core.protocol.RedisHandshakeHandler.lambda$channelActive$3(RedisHandshakeHandler.java:107)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:778)
at java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2140)
at java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:110)
at io.lettuce.core.protocol.RedisHandshakeHandler.channelActive(RedisHandshakeHandler.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:230)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:216)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelActive(AbstractChannelHandlerContext.java:209)
at io.netty.channel.ChannelInboundHandlerAdapter.channelActive(ChannelInboundHandlerAdapter.java:69)
at io.lettuce.core.ChannelGroupListener.channelActive(ChannelGroupListener.java:57)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:230)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:216)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelActive(AbstractChannelHandlerContext.java:209)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelActive(DefaultChannelPipeline.java:1398)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:230)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:216)
at io.netty.channel.DefaultChannelPipeline.fireChannelActive(DefaultChannelPipeline.java:895)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:658)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
Caused by: io.netty.channel.socket.ChannelOutputShutdownException: Channel output shutdown
at io.netty.channel.AbstractChannel$AbstractUnsafe.shutdownOutput(AbstractChannel.java:650)
at io.netty.channel.AbstractChannel$AbstractUnsafe.handleWriteError(AbstractChannel.java:953)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:933)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.flush0(AbstractEpollChannel.java:557)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:895)
at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1372)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:750)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:742)
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:728)
at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:127)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:750)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:765)
at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:790)
at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:758)
at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:808)
at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1025)
at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:306)
at io.lettuce.core.RedisHandshake.dispatch(RedisHandshake.java:250)
at io.lettuce.core.RedisHandshake.dispatchHello(RedisHandshake.java:208)
at io.lettuce.core.RedisHandshake.initiateHandshakeResp3(RedisHandshake.java:196)
at io.lettuce.core.RedisHandshake.tryHandshakeResp3(RedisHandshake.java:97)
at io.lettuce.core.RedisHandshake.initialize(RedisHandshake.java:85)
at io.lettuce.core.protocol.RedisHandshakeHandler.channelActive(RedisHandshakeHandler.java:100)
... 21 more
Caused by: java.lang.UnsatisfiedLinkError: io.netty.channel.unix.Socket.sendAddress(IJII)I
at io.netty.channel.unix.Socket.sendAddress(Native Method)
at io.netty.channel.unix.Socket.sendAddress(Socket.java:302)
at io.netty.channel.epoll.AbstractEpollChannel.doWriteBytes(AbstractEpollChannel.java:362)
at io.netty.channel.epoll.AbstractEpollStreamChannel.writeBytes(AbstractEpollStreamChannel.java:260)
at io.netty.channel.epoll.AbstractEpollStreamChannel.doWriteSingle(AbstractEpollStreamChannel.java:471)
at io.netty.channel.epoll.AbstractEpollStreamChannel.doWrite(AbstractEpollStreamChannel.java:429)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:931)
... 41 more

How to use HSET with table format like this

I have a table structured like this, and I would like to use HSET from Table SQL. Do you support this, or do you have any suggestions?

key field1 field2 field3
1 'a' 1 'aa'
2 'b' 1 'bb'
What I am trying to do is run these commands:
HSET 1 field1 'a' field2 1 field3 'aa'
HSET 2 field1 'b' field2 1 field3 'bb'
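
For reference, the hset tables elsewhere on this page take exactly three columns (key, field, value), so a wide row like the one above would first need to be unpivoted into one (key, field, value) row per column. A minimal sketch under that assumption; src is a hypothetical stand-in for your table, here backed by datagen just to keep the example self-contained:

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class HsetWideRowSketch {
        public static void main(String[] args) {
            TableEnvironment tEnv =
                    TableEnvironment.create(EnvironmentSettings.inStreamingMode());
            // three-column hset sink: hash key, hash field, value
            tEnv.executeSql(
                    "create table redis_hset (hkey STRING, hfield STRING, hvalue STRING) with ("
                            + "'connector'='redis','host'='127.0.0.1','port'='6379',"
                            + "'redis-mode'='single','command'='hset')");
            // hypothetical source with the wide shape from the issue
            tEnv.executeSql(
                    "create table src (`key` INT, field1 STRING, field2 INT, field3 STRING) with ("
                            + "'connector'='datagen','number-of-rows'='2')");
            // unpivot the wide row: one insert branch per field, values cast to STRING
            tEnv.executeSql(
                    "insert into redis_hset "
                            + "select cast(`key` as STRING), 'field1', field1 from src "
                            + "union all "
                            + "select cast(`key` as STRING), 'field2', cast(field2 as STRING) from src "
                            + "union all "
                            + "select cast(`key` as STRING), 'field3', field3 from src");
        }
    }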

Support use as a lookup table on Flink 1.15.x

The program runs perfectly on Flink 1.14.x. On Flink 1.15.x, however, it fails because the Cache class that RedisDynamicTableFactory depends on is com.google.common.cache.Cache from the old Flink 1.14.x shaded package; in the new version the package path has changed to org.apache.flink.shaded.guava30.com.google.common.cache.Cache (provided by flink-shaded-guava). As a result, flink-connector-redis 1.2.3 on 1.15.x works when used only as a sink, but fails when used as a lookup table.

Please update the dependency.

Is Redis sentinel mode not supported?

The SQL client reports an error:
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory' in the classpath.

Exception problem

Caused by: java.lang.VerifyError: Bad type on operand stack
Exception Details:
Location:
io/lettuce/core/resource/AddressResolverGroupProvider$DefaultDnsAddressResolverGroupWrapper.&lt;clinit&gt;()V @51: putstatic
Reason:
Type 'io/netty/resolver/dns/DnsAddressResolverGroup' (current frame, stack[0]) is not assignable to 'io/netty/resolver/AddressResolverGroup'
Current Frame:
bci: @51
flags: { }
locals: { }
stack: { 'io/netty/resolver/dns/DnsAddressResolverGroup' }
Bytecode:
0x0000000: bb00 0259 bb00 0359 b700 04b8 0005 b600
0x0000010: 06b8 0007 1208 b600 09b6 000a bb00 0b59
0x0000020: b700 0cb6 000d bb00 0e59 b700 0fb6 0010
0x0000030: b700 11b3 0012 b1

at io.lettuce.core.resource.AddressResolverGroupProvider.<clinit>(AddressResolverGroupProvider.java:35)
at io.lettuce.core.resource.DefaultClientResources.<clinit>(DefaultClientResources.java:112)
at io.lettuce.core.AbstractRedisClient.<init>(AbstractRedisClient.java:122)
at io.lettuce.core.RedisClient.<init>(RedisClient.java:99)
at io.lettuce.core.RedisClient.create(RedisClient.java:136)
at org.apache.flink.streaming.connectors.redis.common.container.RedisCommandsContainerBuilder.build(RedisCommandsContainerBuilder.java:65)
at org.apache.flink.streaming.connectors.redis.common.container.RedisCommandsContainerBuilder.build(RedisCommandsContainerBuilder.java:34)
at org.apache.flink.streaming.connectors.redis.table.RedisSinkFunction.open(RedisSinkFunction.java:281)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:100)
at org.apache.flink.table.runtime.operators.sink.SinkOperator.open(SinkOperator.java:58)
at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:110)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:711)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:687)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:654)
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)
at java.lang.Thread.run(Thread.java:748)

Flink version 1.14.5; the connector is the release provided on GitHub: https://github.com/jeff-zou/flink-connector-redis/releases/tag/1.2.5

[flink1.16.0]java.lang.UnsatisfiedLinkError: 'int io.netty.channel.unix.Socket.sendAddress(int, long, int, int)'

io.lettuce.core.RedisConnectionException: Unable to connect to x.x.x.x:6379
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:78)
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:56)
at io.lettuce.core.AbstractRedisClient.getConnection(AbstractRedisClient.java:350)
at io.lettuce.core.RedisClient.connect(RedisClient.java:215)
at io.lettuce.core.RedisClient.connect(RedisClient.java:200)
at org.apache.flink.streaming.connectors.redis.common.container.RedisContainer.open(RedisContainer.java:57)
at org.apache.flink.streaming.connectors.redis.table.RedisSinkFunction.open(RedisSinkFunction.java:312)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:100)
at org.apache.flink.table.runtime.operators.sink.SinkOperator.open(SinkOperator.java:58)
at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:107)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:726)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:702)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:669)
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:935)
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:904)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:728)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550)
at java.base/java.lang.Thread.run(Unknown Source)
Caused by: io.netty.channel.socket.ChannelOutputShutdownException: Channel output shutdown
at io.netty.channel.AbstractChannel$AbstractUnsafe.shutdownOutput(AbstractChannel.java:650)
at io.netty.channel.AbstractChannel$AbstractUnsafe.handleWriteError(AbstractChannel.java:953)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:933)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.flush0(AbstractEpollChannel.java:557)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:895)
at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1372)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:921)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:907)
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:893)
at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:127)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:923)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:941)
at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:966)
at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:934)
at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:984)
at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1025)
at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:306)
at io.lettuce.core.RedisHandshake.dispatch(RedisHandshake.java:287)
at io.lettuce.core.RedisHandshake.dispatchHello(RedisHandshake.java:224)
at io.lettuce.core.RedisHandshake.initiateHandshakeResp3(RedisHandshake.java:212)
at io.lettuce.core.RedisHandshake.tryHandshakeResp3(RedisHandshake.java:102)
at io.lettuce.core.RedisHandshake.initialize(RedisHandshake.java:89)
at io.lettuce.core.protocol.RedisHandshakeHandler.channelActive(RedisHandshakeHandler.java:100)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:262)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:238)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelActive(AbstractChannelHandlerContext.java:231)
at io.netty.channel.ChannelInboundHandlerAdapter.channelActive(ChannelInboundHandlerAdapter.java:69)
at io.lettuce.core.ChannelGroupListener.channelActive(ChannelGroupListener.java:57)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:262)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:238)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelActive(AbstractChannelHandlerContext.java:231)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelActive(DefaultChannelPipeline.java:1398)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:258)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:238)
at io.netty.channel.DefaultChannelPipeline.fireChannelActive(DefaultChannelPipeline.java:895)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:658)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
Caused by: java.lang.UnsatisfiedLinkError: 'int io.netty.channel.unix.Socket.sendAddress(int, long, int, int)'
at io.netty.channel.unix.Socket.sendAddress(Native Method)
at io.netty.channel.unix.Socket.sendAddress(Socket.java:302)
at io.netty.channel.epoll.AbstractEpollChannel.doWriteBytes(AbstractEpollChannel.java:362)
at io.netty.channel.epoll.AbstractEpollStreamChannel.writeBytes(AbstractEpollStreamChannel.java:260)
at io.netty.channel.epoll.AbstractEpollStreamChannel.doWriteSingle(AbstractEpollStreamChannel.java:471)
at io.netty.channel.epoll.AbstractEpollStreamChannel.doWrite(AbstractEpollStreamChannel.java:429)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:931)
... 41 more

Flink SQL join against a Redis dimension table: updates in Redis are not picked up in real time

When Flink SQL joins a Redis dimension table and the data in Redis is updated, the Flink job does not seem to see the change in real time. Do dimension tables support dynamic updates? If so, how should they be used?
Currently I use: select ...... from biz_table as a inner join redis_table as b for system_time as of a.proc_time as b on a.pair=b.pair and b.name='HASH_NAME'
where biz_table is a temporary table,
and redis_table is a Hash in Redis; the redis command used is hget.
Connector version: flink-connector-redis:1.2.1
Flink version: 1.14.5

Lettuce API Implicit Failure

Hi,
I've gone through the source code of RedisContainer and may have found a bug.

Since Jedis was replaced with Lettuce as the Redis client here, the error-handling logic should be modified as well.
According to Lettuce: Error handling, if you use the async API, errors should be handled explicitly by applying handle() or exceptionally() to the returned RedisFuture object; otherwise the failure is swallowed silently.

Discussion is welcome since I am new to async programming.
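
For what it's worth, a minimal sketch of the pattern the Lettuce docs describe, attaching an explicit error handler to the RedisFuture (connection details are placeholders):

    import io.lettuce.core.RedisClient;
    import io.lettuce.core.RedisFuture;
    import io.lettuce.core.api.StatefulRedisConnection;
    import io.lettuce.core.api.async.RedisAsyncCommands;

    public class AsyncErrorHandlingSketch {
        public static void main(String[] args) {
            RedisClient client = RedisClient.create("redis://localhost:6379");
            StatefulRedisConnection<String, String> connection = client.connect();
            RedisAsyncCommands<String, String> async = connection.async();

            // RedisFuture implements CompletionStage; without an explicit
            // handler a failed command completes silently.
            RedisFuture<String> future = async.set("key", "value");
            future.exceptionally(t -> {
                // surface the failure instead of swallowing it
                System.err.println("SET failed: " + t.getMessage());
                return null;
            }).toCompletableFuture().join();  // wait before closing, for the demo

            connection.close();
            client.shutdown();
        }
    }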

sink redis: connector not found

1. Flink version 1.14.3; the blink dependency was removed when packaging.
[screenshot: pom configuration]
[screenshot: packaged jar]

2. Error log:
org.apache.flink.table.api.ValidationException: Unable to create a sink for writing table 'default_catalog.default_database.sink_redis'.

Table options are:

'command'='hset'
'connector'='redis'
'host'='xxx'
'password'='******'
'port'='6379'
'redis-mode'='single'
at org.apache.flink.table.factories.FactoryUtil.createTableSink(FactoryUtil.java:184)
at org.apache.flink.table.planner.delegation.PlannerBase.getTableSink(PlannerBase.scala:388)
at org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:222)
at org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:101)
at org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:83)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at org.apache.flink.table.planner.delegation.StreamPlanner.explain(StreamPlanner.scala:83)
at org.apache.flink.table.planner.delegation.StreamPlanner.explain(StreamPlanner.scala:47)
at com.dlink.executor.CustomTableEnvironmentImpl.explainSqlRecord(CustomTableEnvironmentImpl.java:281)
at com.dlink.executor.Executor.explainSqlRecord(Executor.java:249)
at com.dlink.explainer.Explainer.explainSql(Explainer.java:215)
at com.dlink.job.JobManager.explainSql(JobManager.java:474)
at com.dlink.service.impl.StudioServiceImpl.explainFlinkSql(StudioServiceImpl.java:167)
at com.dlink.service.impl.StudioServiceImpl.explainSql(StudioServiceImpl.java:154)
at com.dlink.controller.StudioController.explainSql(StudioController.java:51)
at sun.reflect.GeneratedMethodAccessor537.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205)
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150)
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:808)
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1067)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:963)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006)
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:909)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at com.alibaba.druid.support.http.WebStatFilter.doFilter(WebStatFilter.java:124)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:540)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:357)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:382)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:895)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1732)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a connector using option: 'connector'='redis'
at org.apache.flink.table.factories.FactoryUtil.enrichNoMatchingConnectorError(FactoryUtil.java:587)
at org.apache.flink.table.factories.FactoryUtil.getDynamicTableFactory(FactoryUtil.java:561)
at org.apache.flink.table.factories.FactoryUtil.createTableSink(FactoryUtil.java:180)
... 73 more
Caused by: org.apache.flink.table.api.ValidationException: Could not find any factory for identifier 'redis' that implements 'org.apache.flink.table.factories.DynamicTableFactory' in the classpath.

Available factory identifiers are:

blackhole
clickhouse
datagen
elasticsearch-6
filesystem
jdbc
kafka
mysql-cdc
print
upsert-kafka
at org.apache.flink.table.factories.FactoryUtil.discoverFactory(FactoryUtil.java:399)
at org.apache.flink.table.factories.FactoryUtil.enrichNoMatchingConnectorError(FactoryUtil.java:583)
... 75 more

3. The sink redis SQL:
[screenshot: sink SQL]

Does the Redis sink support batched and interval-flushed commits?

Hello, the sample in DataStreamTest shows that the sink supports batched commits and flush-by-time, but there seems to be no equivalent for the Table API?

            new RedisCacheOptions.Builder().setCacheMaxSize(100).setCacheTTL(10L).build();

Querying Redis data directly fails with: Cannot generate a valid execution plan for the given query

The examples all query through a join against an auto-generated source. Why can't the table be queried directly?

    StringBuilder redisOutTable = new StringBuilder("");
    redisOutTable.append("create table sink_redis (id varchar, login_account varchar, nick_name varchar) ");
    redisOutTable.append("with ( 'connector'='redis', 'host'='127.0.0.1','port'='6379'," +
            " 'redis-mode'='single','password'='','command'='hget'," +
            " 'maxIdle'='2', 'minIdle'='1', 'lookup.cache.max-rows'='10', 'lookup.cache.ttl'='10', 'lookup.max-retries'='3')");
    envTable.executeSql(redisOutTable.toString());

    envTable.executeSql("create table result_table(uid VARCHAR, login_account VARCHAR, nick_name VARCHAR) with ('connector'='print')");
    envTable.executeSql("insert into result_table select id, login_account, nick_name from sink_redis;");

Exception: Exception in thread "main" org.apache.flink.table.api.TableException: Cannot generate a valid execution plan for the given query:

FlinkLogicalSink(table=[default_catalog.default_database.result_table], fields=[id, login_account, nick_name])
+- FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, sink_redis]], fields=[id, login_account, nick_name])

This exception indicates that the query uses an unsupported SQL feature.
Please check the documentation for the set of currently supported SQL features.
at org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:70)
at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.$anonfun$optimize$1(FlinkChainedProgram.scala:59)
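
For reference, the join example earlier on this page only reads Redis through a processing-time lookup join; a plain SELECT over the redis table gives the planner no lookup keys, which matches this error. A minimal sketch of the join form, reusing the DDL from the snippet above (the datagen table is only there to supply lookup keys, and the join condition is illustrative):

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class LookupJoinSketch {
        public static void main(String[] args) {
            TableEnvironment envTable =
                    TableEnvironment.create(EnvironmentSettings.inStreamingMode());
            // same hget table as in the snippet above
            envTable.executeSql(
                    "create table sink_redis (id varchar, login_account varchar, nick_name varchar) "
                            + "with ('connector'='redis','host'='127.0.0.1','port'='6379',"
                            + "'redis-mode'='single','password'='','command'='hget')");
            envTable.executeSql(
                    "create table result_table (uid VARCHAR, login_account VARCHAR, nick_name VARCHAR) "
                            + "with ('connector'='print')");
            // a driving stream supplies the lookup keys and a processing-time attribute
            envTable.executeSql(
                    "create table driver_table (uid VARCHAR, proc_time AS PROCTIME()) with ("
                            + "'connector'='datagen','fields.uid.kind'='sequence',"
                            + "'fields.uid.start'='1','fields.uid.end'='10')");
            envTable.executeSql(
                    "insert into result_table "
                            + "select s.uid, j.login_account, j.nick_name "
                            + "from driver_table as s "
                            + "join sink_redis for system_time as of s.proc_time as j "
                            + "on j.id = s.uid");
        }
    }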

sql-client: problem executing hget

In Redis I ran
hset school class1 stu1 class2 stu2
and then executed in sql-client:
create table sink_redis_get (org varchar, sec_org varchar, v varchar) with ('connector'='redis','host'='localhost','port'='6379','password'='***','redis-mode'='single','command'='hget');
It then failed with:

[ERROR] Could not execute SQL statement. Reason: org.apache.calcite.plan.RelOptPlanner$CannotPlanException: There are not enough rules to produce a node with desired properties: convention=STREAM_PHYSICAL, FlinkRelDistributionTraitDef=any, MiniBatchIntervalTraitDef=None: 0, ModifyKindSetTraitDef=[NONE], UpdateKindTraitDef=[NONE]. Missing conversion is FlinkLogicalTableSourceScan[convention: LOGICAL -> STREAM_PHYSICAL] There is 1 empty subset: rel#170:RelSubset#6.STREAM_PHYSICAL.any.None: 0.[NONE].[NONE], the relevant part of the original plan is as follows

Is batch mode supported?

Hi, on Flink 1.14 the sink writes no data in batch mode. Does the connector support batch execution mode?

Setting `command` option in TableDescriptor leads to Java error

    sink = (
        TableDescriptor.for_connector("redis")
        .option("host", "redis")
        .option("port", "6379")
        .option("database", "1")
        .option("redis-mode", "single")
        .option("command", "set")
        .schema(
            Schema.new_builder()
            .column("k_", DataTypes.STRING())
            .column("v_", DataTypes.STRING())
            .build())
        .build()
    )
    statement_set.add_insert(sink, table)
    statement_set.attach_as_datastream()

When I try this, I receive the following error:

Caused by: java.lang.UnsupportedOperationException
        at java.base/java.util.Collections$UnmodifiableMap.put(Collections.java:1457)
        at org.apache.flink.streaming.connectors.redis.table.RedisDynamicTableFactory.createDynamicTableSink(RedisDynamicTableFactory.java:52)
        at org.apache.flink.table.factories.FactoryUtil.createDynamicTableSink(FactoryUtil.java:319)
        ... 28 more

It looks like an unmodifiable Map is being mutated. I traced the error back, and the following block seems to be the culprit:

if (context.getCatalogTable().getOptions().containsKey(REDIS_COMMAND)) {
            context.getCatalogTable()
                    .getOptions()
                    .put(
                            REDIS_COMMAND,
                            context.getCatalogTable()
                                    .getOptions()
                                    .get(REDIS_COMMAND)
                                    .toUpperCase());
        }

Please help me with this issue.
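
A minimal sketch of one possible fix on the connector side: copy the options into a mutable map and normalize the command in the copy, rather than mutating the catalog table's unmodifiable view. The stand-in map below only simulates context.getCatalogTable().getOptions():

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    public class MutableOptionsSketch {
        private static final String REDIS_COMMAND = "command";

        public static void main(String[] args) {
            // stand-in for context.getCatalogTable().getOptions(), which is unmodifiable
            Map<String, String> raw = new HashMap<>();
            raw.put(REDIS_COMMAND, "set");
            Map<String, String> catalogOptions = Collections.unmodifiableMap(raw);

            // catalogOptions.put(...) would throw UnsupportedOperationException;
            // copy first, then uppercase the command in the copy
            Map<String, String> options = new HashMap<>(catalogOptions);
            options.computeIfPresent(REDIS_COMMAND, (k, v) -> v.toUpperCase());

            System.out.println(options); // {command=SET}
        }
    }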

Use redis pipeline instead of JedisPool

Redis on Spark is much faster than on Flink (when it should be faster on Flink than on Spark), because Spark uses pipelining.
If pipelining could be used on Flink, it would be faster too.

On consuming data from Redis

For lpop/rpop consumption of Redis lists, consumed as a stream the way Kafka is: is there any plan to add this to the project?

Version 1.2.1: querying data in sql-client fails

Flink SQL> select * from alarm_status_redis_dim;
[ERROR] Could not execute SQL statement. Reason:
org.apache.calcite.plan.RelOptPlanner$CannotPlanException: There are not enough rules to produce a node with desired properties: convention=STREAM_PHYSICAL, FlinkRelDistributionTraitDef=any, MiniBatchIntervalTraitDef=None: 0, ModifyKindSetTraitDef=[NONE], UpdateKindTraitDef=[NONE].
Missing conversion is FlinkLogicalTableSourceScan[convention: LOGICAL -> STREAM_PHYSICAL]
There is 1 empty subset: rel#553:RelSubset#14.STREAM_PHYSICAL.any.None: 0.[NONE].[NONE], the relevant part of the original plan is as follows
540:FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, alarm_status_redis_dim]], fields=[alarmKey, alarmStatus])

Root: rel#551:RelSubset#15.STREAM_PHYSICAL.any.None: 0.[NONE].[NONE]
Original rel:
FlinkLogicalSink(subset=[rel#117:RelSubset#1.LOGICAL.any.None: 0.[NONE].[NONE]], table=[default_catalog.default_database.sink_redis], fields=[alarmKey, alarmStatus]): rowcount = 1.0E8, cumulative cost = {1.0E8 rows, 1.0E8 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 121
FlinkLogicalTableSourceScan(subset=[rel#120:RelSubset#0.LOGICAL.any.None: 0.[NONE].[NONE]], table=[[default_catalog, default_database, per_source]], fields=[alarmKey, alarmStatus]): rowcount = 1.0E8, cumulative cost = {1.0E8 rows, 1.0E8 cpu, 1.6E9 io, 0.0 network, 0.0 memory}, id = 119

Sets:
Set#14, type: RecordType(VARCHAR(2147483647) alarmKey, INTEGER alarmStatus)
rel#548:RelSubset#14.LOGICAL.any.None: 0.[NONE].[NONE], best=rel#540
rel#540:FlinkLogicalTableSourceScan.LOGICAL.any.None: 0.[NONE].[NONE](table=[default_catalog, default_database, alarm_status_redis_dim],fields=alarmKey, alarmStatus), rowcount=1.0E8, cumulative cost={1.0E8 rows, 1.0E8 cpu, 1.6E9 io, 0.0 network, 0.0 memory}
rel#553:RelSubset#14.STREAM_PHYSICAL.any.None: 0.[NONE].[NONE], best=null
Set#15, type: RecordType(VARCHAR(2147483647) alarmKey, INTEGER alarmStatus)
rel#550:RelSubset#15.LOGICAL.any.None: 0.[NONE].[NONE], best=rel#549
rel#549:FlinkLogicalSink.LOGICAL.any.None: 0.[NONE].[NONE](input=RelSubset#548,table=anonymous_collect$2,fields=alarmKey, alarmStatus), rowcount=1.0E8, cumulative cost={2.0E8 rows, 2.0E8 cpu, 1.6E9 io, 0.0 network, 0.0 memory}
rel#551:RelSubset#15.STREAM_PHYSICAL.any.None: 0.[NONE].[NONE], best=null
rel#552:AbstractConverter.STREAM_PHYSICAL.any.None: 0.[NONE].[NONE](input=RelSubset#550,convention=STREAM_PHYSICAL,FlinkRelDistributionTraitDef=any,MiniBatchIntervalTraitDef=None: 0,ModifyKindSetTraitDef=[NONE],UpdateKindTraitDef=[NONE]), rowcount=1.0E8, cumulative cost={inf}
rel#554:StreamPhysicalSink.STREAM_PHYSICAL.any.None: 0.[NONE].[NONE](input=RelSubset#553,table=anonymous_collect$2,fields=alarmKey, alarmStatus), rowcount=1.0E8, cumulative cost={inf}

Graphviz:
digraph G {
root [style=filled,label="Root"];
subgraph cluster14{
label="Set 14 RecordType(VARCHAR(2147483647) alarmKey, INTEGER alarmStatus)";
rel540 [label="rel#540:FlinkLogicalTableSourceScan\ntable=[default_catalog, default_database, alarm_status_redis_dim],fields=alarmKey, alarmStatus\nrows=1.0E8, cost={1.0E8 rows, 1.0E8 cpu, 1.6E9 io, 0.0 network, 0.0 memory}",color=blue,shape=box]
subset548 [label="rel#548:RelSubset#14.LOGICAL.any.None: 0.[NONE].[NONE]"]
subset553 [label="rel#553:RelSubset#14.STREAM_PHYSICAL.any.None: 0.[NONE].[NONE]",color=red]
}
subgraph cluster15{
label="Set 15 RecordType(VARCHAR(2147483647) alarmKey, INTEGER alarmStatus)";
rel549 [label="rel#549:FlinkLogicalSink\ninput=RelSubset#548,table=anonymous_collect$2,fields=alarmKey, alarmStatus\nrows=1.0E8, cost={2.0E8 rows, 2.0E8 cpu, 1.6E9 io, 0.0 network, 0.0 memory}",color=blue,shape=box]
rel552 [label="rel#552:AbstractConverter\ninput=RelSubset#550,convention=STREAM_PHYSICAL,FlinkRelDistributionTraitDef=any,MiniBatchIntervalTraitDef=None: 0,ModifyKindSetTraitDef=[NONE],UpdateKindTraitDef=[NONE]\nrows=1.0E8, cost={inf}",shape=box]
rel554 [label="rel#554:StreamPhysicalSink\ninput=RelSubset#553,table=anonymous_collect$2,fields=alarmKey, alarmStatus\nrows=1.0E8, cost={inf}",shape=box]
subset550 [label="rel#550:RelSubset#15.LOGICAL.any.None: 0.[NONE].[NONE]"]
subset551 [label="rel#551:RelSubset#15.STREAM_PHYSICAL.any.None: 0.[NONE].[NONE]"]
}
root -> subset551;
subset548 -> rel540[color=blue];
subset550 -> rel549[color=blue]; rel549 -> subset548[color=blue];
subset551 -> rel552; rel552 -> subset550;
subset551 -> rel554; rel554 -> subset553;
}

Error: no match redis

Using flink-connector-redis-1.2.7-jar-with-dependencies.jar on Flink 1.15.
SQL:

create table sink_redis(user_key VARCHAR, field VARCHAR, inform VARCHAR) with (
'connector' = 'redis',
'host' = 'xxxx',
'port' = 'xxxx',
'password' = 'xxx',
'database' = '8',
'ttl' = '86400',
-- key expiry on sink, in seconds
'command' = 'hset'
);

Error:
Caused by: java.lang.RuntimeException: no match redis
at org.apache.flink.streaming.connectors.redis.common.hanlder.RedisHandlerServices.filterByContext(RedisHandlerServices.java:166)
at org.apache.flink.streaming.connectors.redis.common.hanlder.RedisHandlerServices.filter(RedisHandlerServices.java:90)
at org.apache.flink.streaming.connectors.redis.common.hanlder.RedisHandlerServices.findSingRedisHandler(RedisHandlerServices.java:76)
at org.apache.flink.streaming.connectors.redis.common.hanlder.RedisHandlerServices.findRedisHandler(RedisHandlerServices.java:58)
at org.apache.flink.streaming.connectors.redis.table.RedisDynamicTableSink.&lt;init&gt;(RedisDynamicTableSink.java:42)
at org.apache.flink.streaming.connectors.redis.table.RedisDynamicTableFactory.createDynamicTableSink(RedisDynamicTableFactory.java:65)
at org.apache.flink.table.factories.FactoryUtil.createDynamicTableSink(FactoryUtil.java:259)
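
Possibly relevant: the WITH clause above sets no 'redis-mode', while every working example on this page does, so one hedged guess is that the single/cluster handler lookup ("no match redis" in RedisHandlerServices) fails without it. A sketch of the same DDL with the option added (host/port/password kept as the issue's placeholders):

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class RedisModeSketch {
        public static void main(String[] args) {
            TableEnvironment tEnv =
                    TableEnvironment.create(EnvironmentSettings.inStreamingMode());
            tEnv.executeSql(
                    "create table sink_redis (user_key VARCHAR, field VARCHAR, inform VARCHAR) with ("
                            + "'connector'='redis','host'='xxxx','port'='xxxx','password'='xxx',"
                            + "'database'='8','ttl'='86400',"
                            + "'redis-mode'='single',"  // the option missing from the DDL above
                            + "'command'='hset')");
        }
    }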

Running the test examples fails after importing the dependency

I imported the dependency into IDEA as described in the docs; running the set task under test fails:

[screenshot: error]

Caused by: java.lang.NoSuchMethodError: io.netty.util.CharsetUtil.encoder(Ljava/nio/charset/Charset;)Ljava/nio/charset/CharsetEncoder;

Adding netty-common did not help either (I can see it contains netty-util). The Redis connection details are fine; please take a look, thanks.

Compatibility between save on Flink and Spark

It would be nice if saving on Flink were similar to saving on Spark (redis).
Currently the save on Flink stores only two fields in addition to the key, while the same entity may have many keys/fields; it could simply be done with:
hset(. .)
hset(. .)
......

With the hset command, the TTL keeps getting reset

In Flink SQL, using the hset command with the ttl option configured, the TTL is reset every time the same key is hset; in the extreme case the key may never expire. Is there a parameter to avoid this?

Is writing an update stream supported?

The Redis sink table DDL:

  drop table if exists redis_sink;
  create table redis_sink ( 
    `additionalKey` STRING,
    `redisKey` STRING, 
    `redisValue` decimal(23, 8) ,
    
     PRIMARY KEY (`additionalKey`) NOT ENFORCED
  ) with ( 
    'connector' = 'redis', 
    'redis-mode' = 'cluster', 
    'cluster-nodes' = 'xxxx:6379', 
    'command' = 'HSET',
    'ttl'='1000'
    --'connector.property-version' = '1' 
  ); 
  insert into redis_sink
  select
      'hellohash',
      create_time_day || userId as rediskey,
      sum(amount) as total_user_amount
  from (
      select
          orderId,
          last_value(userId) as userId,
          last_value(amount) as amount,
          last_value(createTimestamp) as createTimestamp,
          REGEXP_EXTRACT(last_value(createTimestamp),'(.*?)[\sT](.+)',1) as `create_time_day`
      from test_topic_source
      where orderId is not null
      group by orderId
  ) group by create_time_day, userId

The error:
java.lang.IllegalStateException: please declare primary key for sink table when query contains update/delete record.
at org.apache.flink.util.Preconditions.checkState(Preconditions.java:193)
at org.apache.flink.streaming.connectors.redis.table.RedisDynamicTableSink.validatePrimaryKey(RedisDynamicTableSink.java:79)
at org.apache.flink.streaming.connectors.redis.table.RedisDynamicTableSink.getChangelogMode(RedisDynamicTableSink.java:53)
at org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.visit(FlinkChangelogModeInferenceProgram.scala:125)
at org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram.optimize(FlinkChangelogModeInferenceProgram.scala:51)
at org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram.optimize(FlinkChangelogModeInferenceProgram.scala:40)
at org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:63)
at org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:60)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)

Does the connector not support sentinel-mode clusters?

Caused by: org.apache.flink.table.api.ValidationException: Unsupported options found for 'redis'.

Unsupported options:

master.name
sentinels.info
sentinels.password

Supported options:

cache.max-retries
cache.max-rows
cache.penetration.prevent
cache.ttl
cache.type
cluster-nodes
command
connector
database
field-column
host
key-column
lookup.hash.enable
lookup.redis.datatype
maxIdle
maxTotal
minIdle
password
port
property-version
put-if-absent
redis-mode
timeout
ttl
value-column

java.lang.VerifyError: Bad return type

create table sink_redis(
name string,
level string,
age string)
with (
'connector'='redis',
'host'='192.168.90.104',
'port'='6379',
'redis-mode'='single',
'password'='123456',
'command'='hset');

select * from sink_redis;

[ERROR] Could not execute SQL statement. Reason:
java.lang.VerifyError: Bad return type
Exception Details:
Location: org/apache/flink/streaming/connectors/redis/common/mapper/row/sink/RowRedisSinkMapper.getCommandDescription()Lorg/apache/flink/streaming/connectors/redis/common/mapper/RedisCommandBaseDescription; @4: areturn
Reason:
Type 'org/apache/flink/streaming/connectors/redis/common/mapper/RedisCommandDescription' (current frame, stack[0]) is not assignable to 'org/apache/flink/streaming/connectors/redis/common/mapper/RedisCommandBaseDescription' (from method signature)
Current Frame:
bci: @4
flags: { }
locals: { 'org/apache/flink/streaming/connectors/redis/common/mapper/row/sink/RowRedisSinkMapper' }
stack: { 'org/apache/flink/streaming/connectors/redis/common/mapper/RedisCommandDescription' }
Bytecode:
0x0000000: 2ab6 0013 b0

I used sql-client; the Flink version is 1.13.6.

How do I set the TTL?

Hi, I see the code supports setting a TTL, but how do I set it in Flink SQL? Could you give a concrete example? Thanks.
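
For reference, the supported-options list earlier on this page includes a ttl option, and one of the issues above sets it in the WITH clause ('ttl' = '86400'). A minimal sketch along those lines (host and command are placeholders):

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class RedisTtlExample {
        public static void main(String[] args) {
            TableEnvironment tEnv =
                    TableEnvironment.create(EnvironmentSettings.inStreamingMode());
            // 'ttl' = key expiry in seconds, applied when the sink writes the key
            tEnv.executeSql(
                    "create table sink_redis (uid VARCHAR, score VARCHAR) with ("
                            + "'connector' = 'redis',"
                            + "'host' = '127.0.0.1',"
                            + "'port' = '6379',"
                            + "'redis-mode' = 'single',"
                            + "'command' = 'set',"
                            + "'ttl' = '86400')");  // expire after one day
        }
    }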
