Cast numeric fields with Kafka Connect and table.whitelist

I have created a source connector and a sink connector in Kafka Connect (Confluent 5.0) to push two SQL Server tables to my data lake.



Here is my SQL Server table schema:



CREATE TABLE MYBASE.dbo.TABLE1 (
    id_field int IDENTITY(1,1) NOT NULL,
    my_numericfield numeric(24,6) NULL,
    time_field smalldatetime NULL,
    CONSTRAINT PK_CBMARQ_F_COMPTEGA PRIMARY KEY (id_field)
)
GO


My Cassandra schema :



CREATE TABLE TEST-TABLE1 (
    my_numericfield decimal,
    id_field int,
    time_field timestamp,
    PRIMARY KEY (id_field)
);


Here is the source connector configuration, using the table.whitelist parameter:



{
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:sqlserver://localhost:1433;database=MYBASE",
    "connection.user": "admin",
    "connection.password": "password",
    "table.whitelist": "TABLE1, TABLE2",
    "mode": "timestamp+incrementing",
    "timestamp.column.name": "time_field",
    "incrementing.column.name": "id_field",
    "validate.non.null": "false",
    "topic.prefix": "TEST-",
    "tasks.max": "8",
    "numeric.mapping": "best_fit"
  },
  "name": "sqlserver-MYBASE-test"
}


Here is my sink connector configuration:



{
  "name": "s3-sink-MYBASE",
  "config": {
    "topics": "TEST-TABLE1, TEST_TABLE2",
    "topics.dir": "DATABASE_FULL",
    "s3.part.size": 5242880,
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "tasks.max": 8,
    "schema.compatibility": "NONE",
    "s3.region": "eu-central-1",
    "schema.generator.class": "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner",
    "format.class": "io.confluent.connect.s3.format.avro.AvroFormat",
    "s3.bucket.name": "mydatalake",
    "flush.size": 1,
    "transforms": "InsertSourceDetails",
    "transforms.InsertSourceDetails.type": "org.apache.kafka.connect.transforms.InsertField$Value",
    "transforms.InsertSourceDetails.static.field": "DATABASE",
    "transforms.InsertSourceDetails.static.value": "MYBASE"
  }
}


The problem is that some fields are typed NUMERIC in SQL Server, and Kafka Connect turns them into BINARY by the time they arrive in the data lake.
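
This is the Connect Decimal logical type at work: the JDBC source maps NUMERIC columns to org.apache.kafka.connect.data.Decimal, which Avro serializes as bytes holding the big-endian two's-complement unscaled value. As a minimal sketch of what those bytes mean (the payload and scale here are taken from the schema and Spark output shown below):

from decimal import Decimal

# Avro's "decimal" logical type stores the unscaled value as
# big-endian two's-complement bytes; the scale comes from the schema.
raw, scale = b"\x00", 6  # matches the [00] payload and "scale": 6 below
unscaled = int.from_bytes(raw, byteorder="big", signed=True)
print(Decimal(unscaled).scaleb(-scale))  # 0E-6, i.e. 0.000000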



Here is the Schema Registry result:



{"type": "record",
"name": "TEST-TABLE1",
"fields": [
{
"name": "my_numericfield",
"type": [
"null",
{
"type": "bytes",
"scale": 6,
"precision": 64,
"connect.version": 1,
"connect.parameters": {
"scale": "6"
},
"connect.name": "org.apache.kafka.connect.data.Decimal",
"logicalType": "decimal"
}
],
"default": null
},
{
"name": "id_field",
"type": "int"
},
{
"name": "cbCreateur",
"type": [
"null",
"string"
],
"default": null
},
{
"name": "time_field",
"type": [
"null",
{
"type": "long",
"connect.version": 1,
"connect.name": "org.apache.kafka.connect.data.Timestamp",
"logicalType": "timestamp-millis"
}
],
"default": null
},
],
"connect.name": "TEST-TABLE1"}


Here is the Spark script and its output:



from pyspark.sql.functions import col

AWS_ID = 'xxxxxxxxxxxxxxxxx'
AWS_KEY = 'xxxxxxxxxxxxxxxxx/'
sc._jsc.hadoopConfiguration().set("fs.s3a.access.key", AWS_ID)
sc._jsc.hadoopConfiguration().set("fs.s3a.secret.key", AWS_KEY)
sc._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com")
spark.conf.set('spark.cassandra.connection.host', 'localhost')
spark.conf.set('spark.cassandra.connection.port', 9042)
spark.conf.set('spark.cassandra.auth.username', 'cassandra')
spark.conf.set('spark.cassandra.auth.password', 'cassandra')

F_TEST_TABLE1 = spark.read.format('com.databricks.spark.avro').load('s3a://mydatalake/DATABASE_FULL/TEST-TABLE1').drop('partition')
DF_TEST_TABLE1 = F_TEST_TABLE1.toDF(*[c.lower() for c in F_TEST_TABLE1.columns])

DF_TEST_TABLE1.printSchema()
root
 |-- my_numericfield: binary (nullable = true)
 |-- id_field: integer (nullable = true)
 |-- time_field: long (nullable = true)

DF_TEST_TABLE1.createTempView("event")

spark.sql("select * from event").show(1, False)
+---------------+--------+-------------+
|my_numericfield|id_field|time_field   |
+---------------+--------+-------------+
|[00]           |5       |1542733800000|
+---------------+--------+-------------+
only showing top 1 row


DF_TEST_TABLE1.write.format('org.apache.spark.sql.cassandra').options(keyspace='sage_full', table='f_test-table1').option('confirm.truncate', True).save(mode='overwrite')

18/11/22 08:29:05 ERROR Executor: Exception in task 1.0 in stage 2.0 (TID 3)
com.datastax.spark.connector.types.TypeConversionException: Cannot convert object [B@6d0d5743 of type class [B to java.lang.BigDecimal.
    at com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:45)
    at scala.PartialFunction$AndThen.applyOrElse(PartialFunction.scala:190)


I'm trying to cast these fields on the fly to a numeric type (e.g. float), but I can't find a way to do it without knowing the field names in advance; a generic Spark-side workaround sketch is shown below.
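
One workaround, along the lines of the Spark UDF suggested in the comments, is to convert every BinaryType column back to a decimal after reading the Avro files, so no field names have to be known up front. A minimal sketch, assuming every binary column is a Connect decimal with the scale 6 declared in the registered schema above:

from decimal import Decimal
from pyspark.sql.functions import col, udf
from pyspark.sql.types import BinaryType, DecimalType

SCALE = 6  # taken from the "scale" parameter in the registered schema

def bytes_to_decimal(b):
    # Connect's Decimal logical type stores the unscaled value as
    # big-endian two's-complement bytes.
    if b is None:
        return None
    unscaled = int.from_bytes(bytes(b), byteorder="big", signed=True)
    return Decimal(unscaled).scaleb(-SCALE)

# 38 is Spark's maximum decimal precision
to_decimal = udf(bytes_to_decimal, DecimalType(38, SCALE))

df = DF_TEST_TABLE1
for f in df.schema.fields:
    if isinstance(f.dataType, BinaryType):
        df = df.withColumn(f.name, to_decimal(col(f.name)))

Writing df to Cassandra should then succeed, since the Spark Cassandra connector can convert a DecimalType column to java.math.BigDecimal.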



With the table.whitelist parameter, the connector processes both tables without any per-field description in the connector configuration, so the numeric field names are not known ahead of time.



Is there a way to cast all NUMERIC fields on the fly?



Thanks for your help.

sql-server cassandra avro apache-kafka-connect confluent

asked Nov 21 at 21:24 by Ftagn, edited Nov 22 at 10:39


  • Hello cricket_007, I'm editing the post.
    – Ftagn
    Nov 22 at 7:42

  • I answered a similar question several days ago: stackoverflow.com/questions/53390352/… - it looks like the error is in Kafka Connect
    – Alex Ott
    Nov 22 at 12:19

  • Thanks for the link. I was looking for something like org.apache.kafka.connect.transforms.Cast$Key, but I don't know if it is possible with table.whitelist (without knowing field names).
    – Ftagn
    Nov 22 at 15:32

  • All transforms are possible with all other settings, but it might be best to handle this with a Spark UDF, similar to the code Alex showed (not sure what the Python equivalent to BigDecimal would be, though)
    – cricket_007
    Nov 22 at 16:54

  • I tried "transforms":"Cast", "transforms.Cast.type":"org.apache.kafka.connect.transforms.Cast$Value", "transforms.Cast.spec":"my_numericfield:float64", but the source connector failed with org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler at [...] at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.kafka.connect.errors.DataException: Unexpected type in Cast transformation: BYTES
    – Ftagn
    Nov 22 at 17:26