Presto/Trino Compilation and Debugging Notes (continuously updated)

A record of problems encountered while doing secondary development and debugging on Trino.

Tracking a Trino runtime error

After Trino had been running for a while, it reported the following error:

Caused by: java.lang.NoClassDefFoundError: io/trino/memory/QueryContext$QueryMemoryReservationHandler
at io.trino.memory.QueryContext.<init>(QueryContext.java:111)
at io.trino.execution.SqlTaskManager.createQueryContext(SqlTaskManager.java:196)
at io.trino.execution.SqlTaskManager.lambda$new$0(SqlTaskManager.java:162)
at com.google.common.cache.CacheLoader$FunctionToCacheLoader.load(CacheLoader.java:168)
at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3529)
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2278)
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2155)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2045)
... 81 more

Normal again after a restart. Most likely a new build had been deployed without restarting the service; the error has not recurred since.

Trino cannot be run in debug mode under IDEA, with errors about missing JDK classes

Go to IDEA > Settings > Compiler > Java Compiler (javac). The per-module JDK listed on the right is probably JDK 8; select all entries, delete them, and re-add them (JDK 11 is required).

A newly added Trino UDF module fails some plugin checks during the Maven build

For example, modernizer-maven-plugin.

Workaround: configure the plugin to be skipped in pom.xml:

<build>
    <plugins>
        <plugin>
            <groupId>org.gaul</groupId>
            <artifactId>modernizer-maven-plugin</artifactId>
            <configuration>
                <skip>true</skip>
            </configuration>
        </plugin>
    </plugins>
</build>
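
Alternatively, the check can often be skipped from the command line without editing pom.xml. A hedged sketch, assuming the plugin exposes the usual modernizer.skip user property and that Trino's Airbase parent honors air.check.skip-all (both worth verifying against your version):

# skip only the modernizer check for this build
mvn install -DskipTests -Dmodernizer.skip=true

# or skip all Airbase-managed checks (checkstyle, modernizer, duplicate-finder, ...)
mvn install -DskipTests -Dair.check.skip-all=true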

Building a single Trino module

mvn -pl 'plugin/trino-udf' -am install -DskipTests

To exclude a module from the build:

mvn -pl '!doc' -am install -DskipTests
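
For reference: -pl selects the listed modules (a leading ! excludes a module), and -am ("also make") additionally builds whatever the selected modules depend on. The two forms can be combined, e.g. (module names are illustrative, and combining includes and excludes requires a reasonably recent Maven):

mvn -pl 'plugin/trino-udf,!docs' -am install -DskipTests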

Trino timestamp conversion

public static long packDateTimeWithZone(long millisUtc, String zoneId)
{
    return packDateTimeWithZone(millisUtc, getTimeZoneKey(zoneId));
}

public static long packDateTimeWithZone(long millisUtc, int offsetMinutes)
{
    return packDateTimeWithZone(millisUtc, getTimeZoneKeyForOffset(offsetMinutes));
}

public static long packDateTimeWithZone(long millisUtc, TimeZoneKey timeZoneKey)
{
    requireNonNull(timeZoneKey, "timeZoneKey is null");
    return pack(millisUtc, timeZoneKey.getKey());
}

@LiteralParameters({"x", "p"})
@SqlType(StandardTypes.BIGINT)
public static long diff(
        @SqlType("varchar(x)") Slice unit,
        @SqlType("timestamp(p) with time zone") long packedEpochMillis1,
        @SqlType("timestamp(p) with time zone") long packedEpochMillis2)
{
    return getTimestampField(unpackChronology(packedEpochMillis1), unit)
            .getDifferenceAsLong(unpackMillisUtc(packedEpochMillis2), unpackMillisUtc(packedEpochMillis1));
}
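
What these snippets show: a timestamp with time zone travels through Trino as a single long that packs the UTC epoch millis together with a time-zone key, so a UDF receiving one must unpack it before doing arithmetic (as diff above does via unpackMillisUtc). A minimal self-contained sketch of the idea; the 12-bit key width mirrors io.trino.spi.type.DateTimeEncoding, but treat the exact constants as an assumption rather than the authoritative implementation:

// Sketch of Trino-style date-time packing: millis in the high bits, zone key in the low 12 bits.
// Assumption: a 12-bit zone key, as in io.trino.spi.type.DateTimeEncoding.
public final class DateTimePackingSketch
{
    private static final int TIME_ZONE_MASK_BITS = 12;
    private static final long TIME_ZONE_MASK = (1L << TIME_ZONE_MASK_BITS) - 1;

    private DateTimePackingSketch() {}

    static long pack(long millisUtc, short timeZoneKey)
    {
        // shift the millis left to make room for the zone key in the low bits
        return (millisUtc << TIME_ZONE_MASK_BITS) | (timeZoneKey & TIME_ZONE_MASK);
    }

    static long unpackMillisUtc(long packed)
    {
        // arithmetic shift preserves the sign for pre-1970 timestamps
        return packed >> TIME_ZONE_MASK_BITS;
    }

    static short unpackZoneKey(long packed)
    {
        return (short) (packed & TIME_ZONE_MASK);
    }

    public static void main(String[] args)
    {
        long packed = pack(1_636_603_710_233L, (short) 0); // key 0 is UTC in Trino
        System.out.println(unpackMillisUtc(packed)); // prints 1636603710233
        System.out.println(unpackZoneKey(packed));   // prints 0
    }
}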

Timeout exception when querying Kudu (actually a Kudu-side problem)

The frontend team reported that some tables could not be queried. Logging into the server revealed a large number of exceptions like the following:

2021-11-11T12:08:30.233+0800	ERROR	remote-task-callback-9651	io.trino.execution.StageStateMachine	Stage 20211111_040632_02070_78b4f.2 failed
java.lang.RuntimeException: org.apache.kudu.client.NonRecoverableException: cannot complete before timeout: ScanRequest(scannerId=null, tablet=de774c4553c64e2a9eb8a1f6c5e55027, attempt=75, KuduRpc(method=Scan, tablet=de774c4553c64e2a9eb8a1f6c5e55027, attempt=75, TimeoutTracker(timeout=120000, elapsed=117293), Trace Summary(48994 ms): Sent(34), Received(33), Delayed(33), MasterRefresh(0), AuthRefresh(0), Truncated: true
Sent: (aeb58702b4464202a204be1537cf45ad, [ Scan, 34 ])
Received: (aeb58702b4464202a204be1537cf45ad, [ UNINITIALIZED, 33 ])
Delayed: (UNKNOWN, [ Scan, 33 ])))
at io.trino.plugin.kudu.KuduRecordCursor.advanceNextPosition(KuduRecordCursor.java:133)
at io.trino.$gen.CursorProcessor_20211111_040450_1983.process(Unknown Source)
at io.trino.operator.ScanFilterAndProjectOperator$RecordCursorToPages.process(ScanFilterAndProjectOperator.java:323)
at io.trino.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
at io.trino.operator.WorkProcessorUtils.getNextState(WorkProcessorUtils.java:221)
at io.trino.operator.WorkProcessorUtils$YieldingProcess.process(WorkProcessorUtils.java:181)
at io.trino.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
at io.trino.operator.WorkProcessorUtils.getNextState(WorkProcessorUtils.java:221)
at io.trino.operator.WorkProcessorUtils.lambda$processStateMonitor$2(WorkProcessorUtils.java:200)
at io.trino.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
at io.trino.operator.WorkProcessorUtils.lambda$flatten$6(WorkProcessorUtils.java:277)
at io.trino.operator.WorkProcessorUtils$3.process(WorkProcessorUtils.java:319)
at io.trino.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
at io.trino.operator.WorkProcessorUtils$3.process(WorkProcessorUtils.java:306)
at io.trino.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
at io.trino.operator.WorkProcessorUtils.getNextState(WorkProcessorUtils.java:221)
at io.trino.operator.WorkProcessorUtils.lambda$processStateMonitor$2(WorkProcessorUtils.java:200)
at io.trino.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
at io.trino.operator.WorkProcessorUtils.getNextState(WorkProcessorUtils.java:221)
at io.trino.operator.WorkProcessorUtils.lambda$finishWhen$3(WorkProcessorUtils.java:215)
at io.trino.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
at io.trino.operator.WorkProcessorSourceOperatorAdapter.getOutput(WorkProcessorSourceOperatorAdapter.java:149)
at io.trino.operator.Driver.processInternal(Driver.java:387)
at io.trino.operator.Driver.lambda$processFor$9(Driver.java:291)
at io.trino.operator.Driver.tryWithLock(Driver.java:683)
at io.trino.operator.Driver.processFor(Driver.java:284)
at io.trino.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:1075)
at io.trino.execution.executor.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:163)
at io.trino.execution.executor.TaskExecutor$TaskRunner.run(TaskExecutor.java:484)
at io.trino.$gen.Trino_356_10_g5f7fa84_dirty____20211106_024354_2.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kudu.client.NonRecoverableException: cannot complete before timeout: ScanRequest(scannerId=null, tablet=de774c4553c64e2a9eb8a1f6c5e55027, attempt=75, KuduRpc(method=Scan, tablet=de774c4553c64e2a9eb8a1f6c5e55027, attempt=75, TimeoutTracker(timeout=120000, elapsed=117293), Trace Summary(48994 ms): Sent(34), Received(33), Delayed(33), MasterRefresh(0), AuthRefresh(0), Truncated: true
Sent: (aeb58702b4464202a204be1537cf45ad, [ Scan, 34 ])
Received: (aeb58702b4464202a204be1537cf45ad, [ UNINITIALIZED, 33 ])
Delayed: (UNKNOWN, [ Scan, 33 ])))
at org.apache.kudu.client.KuduException.transformException(KuduException.java:110)
at org.apache.kudu.client.KuduClient.joinAndHandleException(KuduClient.java:413)
at org.apache.kudu.client.KuduScanner.nextRows(KuduScanner.java:72)
at io.trino.plugin.kudu.KuduRecordCursor.advanceNextPosition(KuduRecordCursor.java:127)
... 32 more
Suppressed: org.apache.kudu.client.KuduException.OriginalException: Original asynchronous stack trace
at org.apache.kudu.client.AsyncKuduClient.tooManyAttemptsOrTimeout(AsyncKuduClient.java:1676)
at org.apache.kudu.client.AsyncKuduClient.delayedSendRpcToTablet(AsyncKuduClient.java:2121)
at org.apache.kudu.client.AsyncKuduClient.handleRetryableError(AsyncKuduClient.java:2045)
at org.apache.kudu.client.RpcProxy.dispatchTSError(RpcProxy.java:341)
at org.apache.kudu.client.RpcProxy.responseReceived(RpcProxy.java:269)
at org.apache.kudu.client.RpcProxy.access$000(RpcProxy.java:59)
at org.apache.kudu.client.RpcProxy$1.call(RpcProxy.java:149)
at org.apache.kudu.client.RpcProxy$1.call(RpcProxy.java:145)
at org.apache.kudu.client.Connection.messageReceived(Connection.java:390)
at org.apache.kudu.shaded.org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.apache.kudu.client.Connection.handleUpstream(Connection.java:238)
at org.apache.kudu.shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.apache.kudu.shaded.org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.apache.kudu.shaded.org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.apache.kudu.shaded.org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:70)
at org.apache.kudu.shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.apache.kudu.shaded.org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.apache.kudu.shaded.org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.apache.kudu.shaded.org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.apache.kudu.shaded.org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.apache.kudu.shaded.org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.apache.kudu.shaded.org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.apache.kudu.shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.apache.kudu.shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.apache.kudu.shaded.org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.apache.kudu.shaded.org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.apache.kudu.shaded.org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.apache.kudu.shaded.org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.apache.kudu.shaded.org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.apache.kudu.shaded.org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.apache.kudu.shaded.org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.apache.kudu.shaded.org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.apache.kudu.shaded.org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
... 3 more
Caused by: org.apache.kudu.client.RecoverableException: safe time has not yet been initialized
at org.apache.kudu.client.RpcProxy.dispatchTSError(RpcProxy.java:341)
at org.apache.kudu.client.RpcProxy.responseReceived(RpcProxy.java:269)
at org.apache.kudu.client.RpcProxy.access$000(RpcProxy.java:59)
at org.apache.kudu.client.RpcProxy$1.call(RpcProxy.java:149)
at org.apache.kudu.client.RpcProxy$1.call(RpcProxy.java:145)
at org.apache.kudu.client.Connection.messageReceived(Connection.java:390)
at org.apache.kudu.shaded.org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.apache.kudu.client.Connection.handleUpstream(Connection.java:238)
at org.apache.kudu.shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.apache.kudu.shaded.org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.apache.kudu.shaded.org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.apache.kudu.shaded.org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:70)
at org.apache.kudu.shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.apache.kudu.shaded.org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.apache.kudu.shaded.org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.apache.kudu.shaded.org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.apache.kudu.shaded.org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.apache.kudu.shaded.org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.apache.kudu.shaded.org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.apache.kudu.shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)

The key part of the log:

org.apache.kudu.client.NonRecoverableException: cannot complete before timeout: ScanRequest(scannerId=null, tablet=de774c4553c64e2a9eb8a1f6c5e55027, attempt=75, KuduRpc(method=Scan, tablet=de774c4553c64e2a9eb8a1f6c5e55027, attempt=75, TimeoutTracker(timeout=120000, elapsed=117293), Trace Summary(48994 ms): Sent(34), Received(33), Delayed(33), MasterRefresh(0), AuthRefresh(0), Truncated: true
Sent: (aeb58702b4464202a204be1537cf45ad, [ Scan, 34 ])
Received: (aeb58702b4464202a204be1537cf45ad, [ UNINITIALIZED, 33 ])
Delayed: (UNKNOWN, [ Scan, 33 ])))

From this we can tell that the scan timed out after Presto handed the task off to Kudu.

Raising the Kudu client timeout and switching to simpler queries did not fix the problem:

kudu.client.default-socket-read-timeout = 120s
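
For context, that property lives in the Kudu catalog file on each Trino node. A minimal sketch of etc/catalog/kudu.properties, assuming the property names documented for the Kudu connector (host names are placeholders):

connector.name=kudu
kudu.client.master-addresses=kudu-master-1:7051,kudu-master-2:7051
kudu.client.default-socket-read-timeout=120s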

So the investigation moved to Kudu itself, whose tablet server logs showed:

W1111 11:38:17.955163  4567 consensus_peers.cc:458] T a80592a4b94c439791c296ca320d2527 P 0bb10cb547e547aab4fbc7875730dacd -> Peer 941d23f7d6484d3684b1a4d0c90b843a (presto-3:7050): Couldn't send request to peer 941d23f7d6484d3684b1a4d0c90b843a. Status: Remote error: Service unavailable: Soft memory limit exceeded (at 101.02% of capacity). This is attempt 130211: this message will repeat every 5th retry.
W1111 11:38:17.983256 4566 consensus_peers.cc:458] T 0601c2387a374c048d1e49aa3a35d490 P 0bb10cb547e547aab4fbc7875730dacd -> Peer aeb58702b4464202a204be1537cf45ad (presto-1:7050): Couldn't send request to peer aeb58702b4464202a204be1537cf45ad. Status: Remote error: Service unavailable: Soft memory limit exceeded (at 100.93% of capacity). This is attempt 12276: this message will repeat every 5th retry.
W1111 11:38:18.022200 4742 maintenance_manager.cc:457] System under memory pressure (100.70% of limit used). However, there are no ops currently runnable which would free memory.

Solution (to be verified)

Kudu error: Remote error: Service unavailable: Soft memory limit exceeded

Kudu has both a hard and a soft memory limit.

The hard limit is the maximum amount of memory the Kudu process is allowed to use, controlled by the --memory_limit_hard_bytes flag.

The soft limit is a percentage of the hard limit, controlled by the --memory_limit_soft_percentage flag (default 80%). It determines how much memory the process may use before it starts rejecting some write operations.

Once the soft limit is exceeded, Kudu rejects writes due to memory backpressure, which can cause writes to time out.

There are several ways to relieve Kudu's memory pressure:

  1. Adjust the configuration

    If the host has more memory available, raise memory_limit_hard_bytes, e.g. from 4 GB to 8 GB (see the sketch below).
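
A sketch of what that change looks like in the tablet server's gflag file (the path is installation-specific, and the tserver must be restarted afterwards):

# /etc/kudu/conf/tserver.gflagfile (path varies by installation)
--memory_limit_hard_bytes=8589934592
# optionally keep the soft limit at its default 80% of the hard limit
--memory_limit_soft_percentage=80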

Querying MySQL reports that a table does not exist

Symptom:

show tables;

shows table names in all lowercase,

but when one of those tables is used, a table-not-found exception is thrown.

Solution:

Add the following to the connector configuration:

"case-insensitive-name-matching": "true"

This makes name matching case-insensitive, so the all-lowercase table names become usable.

A restart is required after changing connector configuration,

or, when catalogs are managed through a dynamic catalog API, the connection must be deleted and re-created under the same name.
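
In a static setup the same switch belongs in the catalog properties file. A hedged sketch of etc/catalog/mysql.properties (connection values are placeholders):

connector.name=mysql
connection-url=jdbc:mysql://mysql-host:3306
connection-user=trino
connection-password=secret
case-insensitive-name-matching=true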

The Trino admin console shows a version containing "dirty"

The version comes from the Manifest file inside the presto-main-*.jar. The version you see there is configured via https://github.com/airlift/airbase/blob/master/airbase/pom.xml#L507. The value of ${git.commit.id.describe} appears to be injected by git-commit-id-plugin, and it is whatever the command git describe --dirty returns. I suspect local modifications were made to the codebase, so the version changed when the jar was regenerated.

Suggested solution

Hi, please add this property to the config.properties file:

presto.version=0.211

After trying it, this does not work: the property is rejected as unknown and Trino fails to start:

Configuration property 'presto.version' was not used

Configuration property 'trino.version' was not used

Configuration property 'version' was not used

No working fix found yet. It is unclear whether the dirty suffix appeared because a single module's jar was replaced, or whether a full Presto build would show it as well; since it has no functional impact, this was not tested further.
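
Since the suffix simply mirrors the output of git describe --dirty, one workaround is to make sure the working tree is clean before building. A sketch using standard git commands (module name is illustrative):

# see what version string the build will pick up
git describe --dirty

# list the local modifications responsible for the "-dirty" suffix
git status --porcelain

# stash (or commit) them, then rebuild
git stash
mvn -pl 'plugin/trino-udf' -am install -DskipTests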

Kudu tables cannot be found in Presto

# query via presto-cli
> show schemas;
# only information_schema shows up

# query via the kudu CLI
> kudu table list <master_nodes>
# confirms the tables do exist

So the tables exist when queried through Kudu directly. Kudu itself has no database/schema concept, so Presto's Kudu connector adds a mechanism to emulate schemas; by default all tables belong to the default schema:

## Kudu does not support schemas, but the connector can emulate them optionally.
## By default, this feature is disabled, and all tables belong to the default schema.
## For more details see connector documentation.
#kudu.schema-emulation.enabled=false

## Prefix to use for schema emulation (only relevant if `kudu.schema-emulation.enabled=true`)
## The standard prefix is `presto::`. Empty prefix is also supported.
## For more details see connector documentation.
#kudu.schema-emulation.prefix=

The Kudu connector configuration at the time was:

kudu.schema-emulation.enabled=true
kudu.schema-emulation.prefix=

In other words, only tables whose names carry the presto:: prefix were loaded and made usable by Presto.

These two settings can therefore be used to selectively load only tables with a specific prefix, as illustrated below.
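
For illustration, with the standard prefix the mapping behaves roughly like this (table names here are hypothetical; the connector documentation is authoritative):

# with kudu.schema-emulation.enabled=true and kudu.schema-emulation.prefix=presto::
Kudu table name        visible in Presto as
presto::web.logs       schema web, table logs
logs                   schema default, table logs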

Table-not-found after connecting a MySQL database

MySQL is case-sensitive, while Presto is not: Presto converts schema and table names to lowercase, so mixed-case MySQL tables cannot be found.

Add the following to the MySQL connector configuration file:

# match schema and table names case-insensitively
case-insensitive-name-matching=true

To preserve case sensitivity (for example, when MySQL contains table names that differ only in case), add case-insensitive-name-matching.config-file and point it at a file that specifies an explicit mapping of schema and table names, as sketched below.
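
A hedged sketch of such a mapping file, following the JSON shape documented for Trino's JDBC-based connectors (all names here are hypothetical):

# in the catalog properties file
case-insensitive-name-matching=true
case-insensitive-name-matching.config-file=/path/to/name-mapping.json

# name-mapping.json
{
  "schemas": [
    {
      "remoteSchema": "CaseSensitiveSchema",
      "mapping": "case_insensitive_schema"
    }
  ],
  "tables": [
    {
      "remoteSchema": "CaseSensitiveSchema",
      "remoteTable": "TableX",
      "mapping": "table_x"
    }
  ]
}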

