We were organizing an SDET Hackathon for our API.
We had to choose our DB plan from MongoDB Atlas.
There were M2, M5, and M10 plans.
M2 and M5 are shared clusters.
M10 was a dedicated cluster, so we went with that.
Tuesday:
On the day of the hackathon, while testing, we hit an "Out of Memory" error in Heroku.
Three of us devs got on a call, and my teammate figured out that a file was not being closed in one method.
I was quite surprised! How was it missed in code review?
We fixed it. There was another place where the file was closed, but without try-with-resources or try-catch-finally, so an exception could still skip the close.
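For anyone curious, try-with-resources guarantees the stream gets closed even when an exception is thrown. A minimal sketch of that kind of fix (the class and file handling here are made up for illustration, not our actual code):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ReportReader {

    // Before the fix, the reader was opened but not closed on the error path,
    // so every failed request leaked a file handle and its buffers.
    public String readFirstLine(String path) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine();
        } // try-with-resources calls reader.close() here in all cases
    }
}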
After this, the Out of Memory error was gone.
Thursday:
We were on Heroku's Eco plan.
We got the "Memory limit exceeded" error multiple times.
And we were getting "Request timeout" errors multiple times too.
For around 100 users, we definitely should have upgraded to the next plan before starting the hackathon.
We moved to the Basic plan. It gives 512 MB, the same as the previous plan, but now we were able to see metrics.
It was better, but the same set of errors still happened from time to time.
Friday:
We analyzed the whole API for possible memory leaks. Didn't find any.
We checked with my friend who organized another API hackathon: with 150 participants, not once did the memory limit get exceeded or the server go down.
Maybe it is because we process PDF files and use a cache in our system.
We limited the cache size for the UserDetails object and the MongoCache.
It didn't have any effect.
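For illustration, a size-bounded cache like the one we put in can be sketched with a plain LinkedHashMap in LRU mode (the class name and limit below are hypothetical, not our real values):

import java.util.LinkedHashMap;
import java.util.Map;

// Evicts the least recently accessed entry once the limit is reached,
// so the cache cannot grow without bound.
public class BoundedCache<K, V> extends LinkedHashMap<K, V> {

    private static final int MAX_ENTRIES = 500; // illustrative limit

    public BoundedCache() {
        super(16, 0.75f, true); // accessOrder = true gives LRU behaviour
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > MAX_ENTRIES;
    }
}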
I checked the response time of an endpoint that reads the test values from a PDF and writes them to the DB.
I was getting 1.5 to 2.5 seconds, depending on the PDF size.
We went through the Heroku guidelines for the memory limit error.
I checked the deployed application size in Heroku; it was 110 MB.
Saturday:
I called up a friend who is an experienced developer and explained the errors we were getting.
He looked at our metrics and told us our app starts with 256 MB. It was an eye-opener.
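A quick way to check what the JVM itself thinks it has is to log the max heap at startup (a minimal sketch, not our actual code):

public class HeapCheck {
    public static void main(String[] args) {
        // Runtime.maxMemory() reports the maximum heap the JVM will try to use.
        long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap visible to the JVM: " + maxHeapMb + " MB");
    }
}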
Meanwhile, my deployment partner tried installing a memory profiling tool and analysing the usage.
I called up another teammate; after some analysis together, she suggested trying JAVA_TOOL_OPTIONS with -XX:+UseContainerSupport.
It is a JVM option that is particularly useful in containerized environments.
It enables optimizations and behaviours specific to running Java applications in containers. For example, it helps the JVM detect the amount of memory actually available to the container, which is important when the JVM sizes its heap.
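On Heroku, the usual way to pass it is through the JAVA_TOOL_OPTIONS config var, which the JVM picks up automatically, something like:

heroku config:set JAVA_TOOL_OPTIONS="-XX:+UseContainerSupport"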
Even after this, memory usage didn't come down.
Sunday:
I got up in the morning and suddenly spotted this in application.properties:
spring.servlet.multipart.max-file-size=256MB
spring.servlet.multipart.max-request-size=256MB
We are just reading lab reports. Why should the max-file-size for the multipart file property be 256 MB?! :-(
The reports we had were all 1000 KB to 3000 KB in size (approximately 1 to 3 MB).
And the maximum number of files uploaded at a time for a patient is 5. Why was max-request-size set to 256 MB? 🥺 On a 512 MB dyno, a single request that large would leave almost no headroom.
I changed them to:
spring.servlet.multipart.max-file-size=10MB
spring.servlet.multipart.max-request-size=50MB
Sunday and Monday load was average, and there was no R14 or H13 error! Cool days.
Thanks to all the friends who supported us!
Be blessed!