
Conversation


@AmarTrebinjac AmarTrebinjac commented Dec 14, 2025

Backend portion of chat gifs

I opted to implement caching with Redis because Tenor recommends caching, due to its 1 RPS rate limit. They also recommend not caching too aggressively, because rankings can change multiple times a day (how often isn't really specified). I picked a 3-hour TTL arbitrarily; it can be more or less.
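The cache-aside flow described above can be sketched roughly as follows. This is a hypothetical illustration, not code from the PR: the PR uses Redis, but an in-memory `Map` stands in here so the sketch is self-contained, and `cachedSearch`/`TTL_MS` are made-up names.

```typescript
// Cache-aside sketch: serve Tenor results from a cache when fresh,
// otherwise fetch upstream and store with a TTL. A Map stands in for Redis.
type CacheEntry<T> = { value: T; expiresAt: number };

const cache = new Map<string, CacheEntry<unknown>>();
const TTL_MS = 3 * 60 * 60 * 1000; // the arbitrary 3-hour TTL mentioned above

async function cachedSearch<T>(
  key: string,
  fetcher: () => Promise<T>,
  ttlMs: number = TTL_MS,
): Promise<T> {
  const hit = cache.get(key) as CacheEntry<T> | undefined;
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value; // cache hit: no upstream call, so no 1 RPS pressure
  }
  const value = await fetcher();
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}
```

With Redis the same shape applies (`GET`, then `SET` with `EX`), and the TTL trade-off is exactly the one described: longer TTLs protect the rate limit, shorter ones track ranking changes.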


pulumi bot commented Dec 14, 2025

🍹 The Update (preview) for dailydotdev/api/prod (at c8e5c97) was successful.

Resource Changes

    Name                                                   Type                           Operation
~   vpc-native-hourly-notification-cron                    kubernetes:batch/v1:CronJob    update
~   vpc-native-clean-zombie-opportunities-cron             kubernetes:batch/v1:CronJob    update
+-  vpc-native-k8s-secret                                  kubernetes:core/v1:Secret      create-replacement
~   vpc-native-ws-deployment                               kubernetes:apps/v1:Deployment  update
~   vpc-native-generic-referral-reminder-cron              kubernetes:batch/v1:CronJob    update
-   vpc-native-api-db-migration-093f4d3b                   kubernetes:batch/v1:Job        delete
~   vpc-native-bg-deployment                               kubernetes:apps/v1:Deployment  update
+   vpc-native-api-clickhouse-migration-8395e149           kubernetes:batch/v1:Job        create
~   vpc-native-validate-active-users-cron                  kubernetes:batch/v1:CronJob    update
~   vpc-native-update-tags-str-cron                        kubernetes:batch/v1:CronJob    update
~   vpc-native-clean-stale-user-transactions-cron          kubernetes:batch/v1:CronJob    update
~   vpc-native-update-highlighted-views-cron               kubernetes:batch/v1:CronJob    update
~   vpc-native-update-tag-recommendations-cron             kubernetes:batch/v1:CronJob    update
~   vpc-native-deployment                                  kubernetes:apps/v1:Deployment  update
~   vpc-native-personalized-digest-cron                    kubernetes:batch/v1:CronJob    update
~   vpc-native-update-views-cron                           kubernetes:batch/v1:CronJob    update
~   vpc-native-daily-digest-cron                           kubernetes:batch/v1:CronJob    update
~   vpc-native-clean-zombie-users-cron                     kubernetes:batch/v1:CronJob    update
~   vpc-native-generate-search-invites-cron                kubernetes:batch/v1:CronJob    update
~   vpc-native-post-analytics-history-day-clickhouse-cron  kubernetes:batch/v1:CronJob    update
~   vpc-native-post-analytics-clickhouse-cron              kubernetes:batch/v1:CronJob    update
~   vpc-native-temporal-deployment                         kubernetes:apps/v1:Deployment  update
~   vpc-native-personalized-digest-deployment              kubernetes:apps/v1:Deployment  update
~   vpc-native-user-profile-updated-sync-cron              kubernetes:batch/v1:CronJob    update
~   vpc-native-update-source-public-threshold-cron         kubernetes:batch/v1:CronJob    update
~   vpc-native-private-deployment                          kubernetes:apps/v1:Deployment  update
-   vpc-native-api-clickhouse-migration-093f4d3b           kubernetes:batch/v1:Job        delete
+   vpc-native-api-db-migration-8395e149                   kubernetes:batch/v1:Job        create
~   vpc-native-sync-subscription-with-cio-cron             kubernetes:batch/v1:CronJob    update
~   vpc-native-update-current-streak-cron                  kubernetes:batch/v1:CronJob    update
~   vpc-native-update-source-tag-view-cron                 kubernetes:batch/v1:CronJob    update
~   vpc-native-calculate-top-readers-cron                  kubernetes:batch/v1:CronJob    update
~   vpc-native-clean-zombie-images-cron                    kubernetes:batch/v1:CronJob    update
~   vpc-native-clean-gifted-plus-cron                      kubernetes:batch/v1:CronJob    update
~   vpc-native-check-analytics-report-cron                 kubernetes:batch/v1:CronJob    update
~   vpc-native-clean-zombie-user-companies-cron            kubernetes:batch/v1:CronJob    update
~   vpc-native-update-trending-cron                        kubernetes:batch/v1:CronJob    update

@AmarTrebinjac AmarTrebinjac changed the title feat: gifs feat: gif endpoints Dec 14, 2025
@AmarTrebinjac AmarTrebinjac marked this pull request as ready for review December 16, 2025 10:33
@AmarTrebinjac AmarTrebinjac requested review from a team and capJavert as code owners December 16, 2025 10:33
import { tenorClient } from '../integrations/tenor';
import { logger } from '../logger';

export default async function (fastify: FastifyInstance): Promise<void> {
Contributor

we should protect these endpoints (only logged-in users) and also rate limit them ourselves

Contributor

also, why not gql? There's a lot of manual work here (error handling, schemas) that could be defined in GraphQL and zod, which already work there

Member

Yeah, I'd go for gql too

Contributor Author

> also, why not gql? There's a lot of manual work here (error handling, schemas) that could be defined in GraphQL and zod, which already work there

What extra benefits would gql provide? We can use zod here as well if we want. Personally, I think it makes sense to use plain API endpoints for simple services like this.

> and also rate limit them ourselves

I disagree with this. I want to let people search as freely as possible, without adding more ways for the user to not get the desired result.
I think if it becomes a noticeable issue, we can request a higher rate limit from Tenor.

Contributor

@capJavert capJavert Dec 17, 2025

> I disagree with this. I want to let people search as freely as possible, without adding more ways for the user to not get the desired result.
> I think if it becomes a noticeable issue, we can request a higher rate limit from Tenor.

The issue is that like this you expose a free endpoint for anyone to abuse, especially bad actors. We have to guard it at least somehow; the limits don't need to be low, they can be generous, but they need to block overuse.

Contributor Author

> The issue is that like this you expose a free endpoint for anyone to abuse, especially bad actors. We have to guard it at least somehow; the limits don't need to be low, they can be generous, but they need to block overuse.

Fair point, but won't checking for an authenticated user already solve this problem?

Contributor

partially yes
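The "generous but bounded" per-user limit argued for above might look like the following sketch. In Fastify this would more likely be the @fastify/rate-limit plugin backed by a shared store; the in-memory fixed window and the specific numbers here are illustrative assumptions, not anything decided in this PR.

```typescript
// Fixed-window per-user rate limit sketch: generous enough for normal
// searching, but blocks scripted overuse of the proxied Tenor endpoint.
type Window = { count: number; resetAt: number };

const windows = new Map<string, Window>();

function allowRequest(
  userId: string,
  limit = 60, // illustrative: 60 searches per minute per user
  windowMs = 60_000,
): boolean {
  const now = Date.now();
  const w = windows.get(userId);
  if (!w || w.resetAt <= now) {
    // new window for this user
    windows.set(userId, { count: 1, resetAt: now + windowMs });
    return true;
  }
  if (w.count >= limit) {
    return false; // over the limit: reject (e.g. respond 429)
  }
  w.count += 1;
  return true;
}
```

Combined with the auth check, this closes the "free endpoint for bad actors" gap while staying invisible to normal users.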

Contributor

looking now, is this what we use? https://developers.google.com/tenor/guides/rate-limits-and-caching#rate-limits

it has 1 req/s, this could break even if just 2 users try to do it in parallel?

Contributor Author

@capJavert Correct, I mentioned that in my OP. Realistically I don't think it's going to be that big of a problem because of debouncing and caching. Also, we don't even average 1 comment per minute on our site 😅

I think the announcement post will be the best test to see if it's an actual issue, lol. I can imagine people will be posting gifs in the changelog.
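One way to keep parallel users from tripping Tenor's 1 request/second limit (on top of debouncing and caching) is to serialize upstream calls and space them out server-side. This is only a sketch of the idea, not code from the PR; `throttledTenorCall` and the queueing approach are assumptions.

```typescript
// Serialize upstream Tenor calls and enforce a minimum gap between them.
// Cache hits never enter this queue, so it should stay short in practice.
let lastCall: Promise<unknown> = Promise.resolve();
let nextAllowedAt = 0;

function throttledTenorCall<T>(
  fn: () => Promise<T>,
  minGapMs = 1000, // Tenor's documented 1 req/s budget
): Promise<T> {
  const result = lastCall.then(async () => {
    const wait = nextAllowedAt - Date.now();
    if (wait > 0) {
      await new Promise((resolve) => setTimeout(resolve, wait));
    }
    nextAllowedAt = Date.now() + minGapMs;
    return fn();
  });
  // Chain the next caller after this one, swallowing errors so a failed
  // request doesn't jam the queue.
  lastCall = result.then(
    () => undefined,
    () => undefined,
  );
  return result;
}
```

Two users searching at the same instant would then produce two upstream requests one second apart instead of a burst, at the cost of slightly higher latency for the second user.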

Contributor

@capJavert capJavert Dec 19, 2025

makes sense 🤞

const gifToToggle = req.body as Gif;
const gifs: Gif[] = [];
if (existingFavorites?.meta) {
  gifs.push(...existingFavorites.meta.favorites);
}
Contributor

@capJavert capJavert Dec 16, 2025

this can be a pretty big array, no, if a user favorites a lot? Wouldn't it be better to save it as a relation in the db?

Contributor

maybe content preference or something

Contributor Author

In theory, yes, but I'm not sure I see a problem with it.
I think Discord does it in a similar way, because I have a lot of favorites and they load them all 😅

We could maybe limit the number of favorites users can have to 50 or something, but honestly I think we can keep it as is.
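If the 50-favorite cap floated here were adopted, the toggle endpoint's core logic could be sketched like this. The cap, the `Gif` shape, and `toggleFavorite` are illustrative assumptions for this discussion, not the PR's actual implementation.

```typescript
// Toggle a gif in the user's favorites list, enforcing a hard cap so the
// JSONB blob can never grow unbounded.
interface Gif {
  id: string;
  url: string;
}

const MAX_FAVORITES = 50; // illustrative cap from the discussion above

function toggleFavorite(favorites: Gif[], gif: Gif): Gif[] {
  const exists = favorites.some((f) => f.id === gif.id);
  if (exists) {
    // already favorited: toggling removes it
    return favorites.filter((f) => f.id !== gif.id);
  }
  if (favorites.length >= MAX_FAVORITES) {
    throw new Error('favorite limit reached'); // e.g. respond 409/422
  }
  return [...favorites, gif];
}
```

A cap like this bounds both the row size and the payload sent to the client, which is the main concern raised about the single-JSONB-column approach.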

Contributor

yes, but I assure you Discord does not store them in a single JSON row on the server. You can limit it to 50, up to you, but it's just dooming/locking the feature unnecessarily from the beginning.

also we can maybe just launch gifs without favorites and see where it goes

Contributor Author

Nah, I think the favorite gifs are an important aspect.

Reading around a bit, it seems very unlikely that the JSONB would cause issues, as a user would need to favorite an absolutely massive number of gifs.

If we ever run into this issue, I can look at maybe storing them differently. :)

@rebelchris rebelchris self-requested a review December 17, 2025 06:05