src/pages/en/showcases/fast-entires.mdx (+102 -55)

import { Guides } from '@/components/Guides'

export const description = 'Implement message queues for better resource management'

# Fast Entries

## Issue

Sometimes users are impatient and send several independent messages within a very short time gap, without giving the bot a chance to answer; as a result, each message is answered, but not in the desired order.

For this type of environment, we've implemented an enhanced functionality that introduces a margin of 3000ms for the user to write messages. Each time the user writes a message within this 3000ms window, the messages accumulate; after the margin time expires, the bot interprets everything as a single conversation.

```mermaid
flowchart LR
    n(User) -.m1.-> Q
    n(User) -.m2.-> Q
    n(User) -.m3.-> Q
```

This implementation ensures that, before passing to the processing stage, all independent messages (e.g., 3) become one (1) and are processed as a single message.

In this example, we use __3000ms__ (equal to 3 seconds) as the default gap, but you can modify this to your liking by adjusting the `gapSeconds` in the `QueueConfig`.
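
To make the timing concrete, below is a minimal, self-contained sketch of the idea using plain `setTimeout` debouncing. It is only an illustration, not the showcase's actual code; the names `GAP_MS`, `pending`, `receive`, and `onReady` are placeholders.

```ts
// Minimal illustration of the accumulation window described above.
// Messages arriving less than GAP_MS apart are buffered; once GAP_MS passes
// with no new message, everything buffered is delivered as a single text.
const GAP_MS = 3000

let pending: string[] = []
let timer: NodeJS.Timeout | null = null

function receive(text: string, onReady: (merged: string) => void): void {
    pending.push(text)
    if (timer) clearTimeout(timer) // a new message resets the window
    timer = setTimeout(() => {
        onReady(pending.join(' ')) // deliver everything as one message
        pending = []
        timer = null
    }, GAP_MS)
}

// Example: three quick messages end up as one merged string after ~3 s.
receive('hello', console.log)
receive('I need', console.log)
receive('help with my order', console.log)
// ~3 s later logs: "hello I need help with my order"
```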

<VideoVertical label="Video Fast Entries" yt="hGTgQDALEmE"/>

<CodeGroup>

```ts {{ title: 'fast-entires.ts' }}
/**
 * @file messageQueue.ts
 * @description A functional implementation of a message queueing system with debounce functionality.
 */

// A single incoming message, captured together with its arrival time.
interface Message {
    text: string;
    timestamp: number;
}

// Configuration for the queue: the gap that has to elapse before processing.
interface QueueConfig {
    gapSeconds: number;
}

// The full state of the queue: buffered messages, the pending debounce timer,
// and the callback that receives the merged body once the gap expires.
interface QueueState {
    queue: Message[];
    timer: NodeJS.Timeout | null;
    callback: ((body: string) => void) | null;
}
```

</CodeGroup>
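
The type definitions above cover the data shapes; one possible shape for the queue functions that work with them is sketched below. Treat it as an illustrative reconstruction: the factory name `createMessageQueue`, the helper `processQueue`, and the exact logging and error handling are assumptions, not the showcase's actual code.

```ts
// Sketch (assumed names): a factory that closes over a QueueState and
// debounces incoming messages, merging everything received within the gap.
function createMessageQueue(config: QueueConfig) {
    let state: QueueState = { queue: [], timer: null, callback: null }

    function processQueue(): void {
        try {
            const body = state.queue.map((m) => m.text).join(' ')
            console.log('Processing accumulated messages:', body)
            state.callback?.(body)
        } catch (error) {
            console.error('Error processing message queue:', error)
        } finally {
            // Replace the state instead of mutating the old queue in place.
            state = { ...state, queue: [], timer: null }
        }
    }

    return function enqueueMessage(text: string, callback: (body: string) => void): void {
        if (state.timer) clearTimeout(state.timer)
        state = {
            queue: [...state.queue, { text, timestamp: Date.now() }],
            // The gap is treated as milliseconds here, matching the 3000ms described above.
            timer: setTimeout(processQueue, config.gapSeconds),
            callback,
        }
    }
}
```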

### Key Improvements in the New Implementation:

1. **Functional Approach**: The new implementation uses a functional programming style, which can lead to more predictable and testable code.

2. **Immutable State**: The state of the queue is managed immutably, which helps prevent unexpected side effects.

3. **Flexible Configuration**: The `QueueConfig` interface allows for easy adjustment of the gap time.

4. **Enhanced Error Handling**: The implementation includes try-catch blocks for better error management.

5. **Callback-based Processing**: Instead of returning a promise, the new implementation uses a callback function, allowing for more flexible message processing (see the usage sketch after this list).

6. **Detailed Logging**: Console logs have been added at key points to aid in debugging and understanding the message flow.
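
As a usage illustration of the callback-based processing (item 5), here is a sketch built on the hypothetical `createMessageQueue` factory from the sketch above; the handler name `onIncomingMessage` is a placeholder and does not come from the showcase.

```ts
// Hypothetical wiring: every incoming message is enqueued, and the callback
// fires only once the 3000 ms gap has elapsed with no further messages.
const enqueueMessage = createMessageQueue({ gapSeconds: 3000 })

function onIncomingMessage(text: string): void {
    enqueueMessage(text, (body) => {
        // `body` is the merged conversation; hand it to the bot's normal
        // processing step here (intent detection, reply generation, etc.).
        console.log('Bot receives a single merged message:', body)
    })
}

onIncomingMessage('hi')
onIncomingMessage('are you there?')
onIncomingMessage('I have a question')
// After ~3 s of silence the callback runs once with all three texts merged.
```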

Remember that while this implementation offers significant improvements, it's always possible to further optimize based on specific use cases and requirements.