Details
- Type: Bug
- Status: Closed
- Priority: Default
- Resolution: Duplicate
- Affects Version/s: 2.1.0
- Fix Version/s: None
- Component/s: None
- Labels: None
Description
This code validates the checksum of incoming messages:
Message.java:
private void validateCheckSum(String messageData) throws InvalidMessage {
    try {
        // Body length is checked at the protocol layer
        final int checksum = trailer.getInt(CheckSum.FIELD);
        if (checksum != MessageUtils.checksum(messageData)) {
            // message will be ignored if checksum is wrong or missing
            throw MessageUtils.newInvalidMessageException("Expected CheckSum=" + MessageUtils.checksum(messageData)
                    + ", Received CheckSum=" + checksum + " in " + messageData, this);
        }
    } catch (final FieldNotFound e) {
        throw MessageUtils.newInvalidMessageException("Field not found: " + e.field + " in " + messageData, this);
    }
}
And in MessageUtils the calculation ends up here:
public static int checksum(Charset charset, String data, boolean isEntireMessage) {
    if (CharsetSupport.isStringEquivalent(charset)) { // optimization - skip charset encoding
        int sum = 0;
        int end = isEntireMessage ? data.lastIndexOf("\00110=") : -1;
        int len = end > -1 ? end + 1 : data.length();
        for (int i = 0; i < len; i++) {
            sum += data.charAt(i);
        }
        return sum & 0xFF; // better than sum % 256 since it avoids overflow issues
    }
    return checksum(data.getBytes(charset), isEntireMessage);
}
So the problem here is that the calculation happens NOT on the raw network bytes but on a Java String that is later converted back to bytes using CharsetSupport's charset, which may not be the same charset the original message producer used and can therefore produce different bytes.
In practice this can lead to a situation where a message with a correct checksum is ignored because it was read by QFJ using the wrong charset.
This appears to be exactly what I am seeing. The use case is a non-ASCII character in the message combined with different charsets on the producer and consumer sides.
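A minimal standalone sketch of this failure mode, not using QFJ classes directly (the field value, the charset choices, and the small checksum helper below are illustrative assumptions, not taken from the QFJ code base):
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class ChecksumMismatchSketch {

    // FIX-style checksum: sum of all bytes modulo 256 (same idea as MessageUtils.checksum).
    static int checksum(byte[] data) {
        int sum = 0;
        for (byte b : data) {
            sum += b & 0xFF;
        }
        return sum & 0xFF;
    }

    public static void main(String[] args) {
        // Producer side: a hypothetical field value with a non-ASCII character, sent as ISO-8859-1.
        // The CheckSum(10) it transmits is computed over these raw wire bytes.
        byte[] wireBytes = "58=café".getBytes(StandardCharsets.ISO_8859_1);
        int checksumOnWire = checksum(wireBytes);

        // Consumer side: the receiver is configured with a different charset (UTF-8 here).
        // The wire bytes are decoded to a String first; the 0xE9 byte is not valid UTF-8,
        // so it is replaced, and re-encoding that String no longer reproduces the wire bytes.
        Charset consumerCharset = StandardCharsets.UTF_8;
        String decoded = new String(wireBytes, consumerCharset);
        int checksumRecomputed = checksum(decoded.getBytes(consumerCharset));

        System.out.println("CheckSum sent by producer : " + checksumOnWire);
        System.out.println("CheckSum recomputed locally: " + checksumRecomputed);
        // The two values differ, so a validateCheckSum()-style check would reject
        // a message whose checksum was correct on the wire.
    }
}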
Issue Links
- duplicates: QFJ-789 Fully support alternate encodings (charsets) (Open)