Restarting a server causes deployment to run again

Hello 👋 I have noticed that turning a server OFF and ON on Hetzner can cause Dokploy to trigger a deployment when it starts up, even though the "autodeploy" button is off.

I expect a new deployment to be triggered only when:
- I press the deployment button
- the autodeploy button is enabled and I push a commit

I do not expect a new deployment to be triggered:
- when I restart my server instance

Other info: it tries to redeploy a commit that has already been deployed (see image).
3 Replies
predragnikolic
predragnikolicOP•8mo ago
With the Dokploy dashboard open and the browser network tab open: when I turn the server off on Hetzner and turn it back on, I can see that Dokploy sends the following request at some interval: https://dokploy.xyz.com/api/trpc/auth.get,application.one,deployment.all
I first see a few responses with status 527 from Cloudflare in the network panel, then a few 502, then I see this response:
[
  // .. other data
  {
    "result": {
      "data": {
        "json": [
          {
            "deploymentId": "m9vhDhIEJsx3WuUJAjdpP",
            "title": "remove newline",
            "description": "Hash: f3a3fdb16bad9ed72081c4c416bd997078b82805",
            "status": "done",
            "logPath": "/etc/dokploy/logs/xyz-4468ea/xyz-4468ea-2024-09-12:17:57:35.log",
            "applicationId": "KorYNyEwh0g2j4zWETFsf",
            "composeId": null,
            "createdAt": "2024-09-12T17:57:35.347Z"
          },
Then, when the app calls https://dokploy.xyz.com/api/trpc/auth.get,application.one,deployment.all again, the response shows that a new deployment has been created:
[
  // .. other data
  {
    "result": {
      "data": {
        "json": [
          { // <-- notice this new deployment being created automatically on Dokploy startup
            "deploymentId": "14d4DwUlQhyPn3jIqTEZx",
            "title": "remove newline",
            "description": "Hash: f3a3fdb16bad9ed72081c4c416bd997078b82805",
            "status": "running",
            "logPath": "/etc/dokploy/logs/xyz-4468ea/xyz-4468ea-2024-09-12:18:05:46.log",
            "applicationId": "KorYNyEwh0g2j4zWETFsf",
            "composeId": null,
            "createdAt": "2024-09-12T18:05:46.069Z"
          },
          {
            "deploymentId": "m9vhDhIEJsx3WuUJAjdpP",
            "title": "remove newline",
            "description": "Hash: f3a3fdb16bad9ed72081c4c416bd997078b82805",
            "status": "done",
            "logPath": "/etc/dokploy/logs/xyz-4468ea/xyz-4468ea-2024-09-12:17:57:35.log",
            "applicationId": "KorYNyEwh0g2j4zWETFsf",
            "composeId": null,
            "createdAt": "2024-09-12T17:57:35.347Z"
          },
I do not understand why that happens. This code looks like it might be the cause:
export const deploymentWorker = new Worker(
  "deployments",
  async (job: Job<DeploymentJob>) => {
    try {
      if (job.data.applicationType === "application") {
        await updateApplicationStatus(job.data.applicationId, "running");
        if (job.data.type === "redeploy") {
          await rebuildApplication({
            applicationId: job.data.applicationId,
            titleLog: job.data.titleLog,
            descriptionLog: job.data.descriptionLog,
          });
        } else if (job.data.type === "deploy") {
          await deployApplication({
            applicationId: job.data.applicationId,
            titleLog: job.data.titleLog,
            descriptionLog: job.data.descriptionLog,
          });
        }
      } else if (job.data.applicationType === "compose") {
        await updateCompose(job.data.composeId, {
          composeStatus: "running",
        });
        // ... (rest of the worker omitted)
// server.ts
app.prepare().then(async () => {
  try {
    // ... other code
    server.listen(PORT);
    console.log("Server Started:", PORT);
    deploymentWorker.run(); // here the worker is run at startup
  } catch (e) {
    console.error("Main Server Error", e);
  }
});
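If I understand BullMQ correctly, the queue lives in Redis, so a job that was still marked "active" when the server was powered off is still there after the reboot; once deploymentWorker.run() starts again, that job is detected as stalled and processed a second time, which would explain the automatic redeploy. Here is a minimal diagnostic sketch to check what is left in the queue right after a restart (the queue name matches the worker above, but the Redis connection settings are placeholders, not Dokploy's actual config):

import { Queue } from "bullmq";

// Sketch only: the connection settings below are placeholders,
// not Dokploy's real Redis config.
const deploymentsQueue = new Queue("deployments", {
  connection: { host: "127.0.0.1", port: 6379 },
});

async function inspectLeftoverJobs() {
  // Jobs persisted in Redis across the reboot, grouped by state.
  const counts = await deploymentsQueue.getJobCounts(
    "active",
    "waiting",
    "delayed",
    "failed",
  );
  console.log("Jobs still in the queue after restart:", counts);

  // Anything still "active" is what BullMQ will treat as stalled
  // and hand back to the worker once it starts again.
  const active = await deploymentsQueue.getJobs(["active"]);
  for (const job of active) {
    console.log("Previously active job:", job.id, job.data);
  }
}

inspectLeftoverJobs().catch(console.error);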
I am not sure if it is better to explicitly tell BullMQ that a job finished successfully by doing:
const worker = new Worker('my-queue', async job => {
  try {
    // Process the job...
    await someProcessingFunction(job);

    // Manually mark the job as completed
    await job.moveToCompleted('done', true); // 'done' is the result,
or by putting a check like this (but I would rather fix the bug at its root than add a check like this):
if (job.data.type === "deploy" || job.data.type === "redeploy") {
  const currentDeployment = await findDeploymentById(job.data.deploymentId);
  if (currentDeployment.status === "done") {
    console.log("Deployment already completed, skipping...");
    return;
  }
  // Continue with deployment or redeployment
}
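For illustration, this is roughly where that guard would sit inside the existing processor. It is only a sketch: it assumes the job payload actually carries a deploymentId and that findDeploymentById returns a record with a status field, which I have not verified against the real DeploymentJob type.

export const deploymentWorker = new Worker(
  "deployments",
  async (job: Job<DeploymentJob>) => {
    // Sketch only: assumes job.data.deploymentId exists on DeploymentJob.
    if (job.data.type === "deploy" || job.data.type === "redeploy") {
      const currentDeployment = await findDeploymentById(job.data.deploymentId);
      if (currentDeployment?.status === "done") {
        // The deployment finished before the restart, so skip the stalled retry.
        console.log("Deployment already completed, skipping...");
        return;
      }
    }
    // ... the existing deploy/redeploy handling continues here unchanged
  },
  { connection: redisConfig }, // placeholder for the real worker options
);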
Siumauricio
Siumauricio•8mo ago
I guess there are options to prevent Redis from retrying the deployment.
predragnikolic
predragnikolicOP•8mo ago
https://docs.bullmq.io/bull/important-notes
Maybe I hit this:
It's important to understand how locking works to prevent your jobs from losing their lock - becoming stalled - and being restarted as a result. ... Your job processor was too CPU-intensive and stalled the Node event loop, and as a result, Bull couldn't renew the job lock (see #488 for how we might better detect this). You can fix this by breaking your job processor into smaller parts so that no single part can block the Node event loop. Alternatively, you can pass a larger value for the lockDuration setting (with the tradeoff being that it will take longer to recognize a real stalled job).
Yes, building the app is CPU-intensive; I need to scale the server instance up in order to build the app when I want to deploy a new version.
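If that is the cause, the worker options would be the place to tune it. A minimal sketch, assuming the same "deployments" queue (the connection and the numbers are placeholders): a larger lockDuration gives the CPU-heavy build more time before BullMQ considers the lock lost, and maxStalledCount: 0 makes a stalled job fail instead of being silently re-run.

import { Worker, type Job } from "bullmq";

// Sketch only: the processor body is omitted and the connection/values are placeholders.
const deploymentWorker = new Worker(
  "deployments",
  async (_job: Job) => {
    // ... CPU-heavy build/deploy work
  },
  {
    connection: { host: "127.0.0.1", port: 6379 },
    lockDuration: 10 * 60 * 1000, // allow long builds before the lock is treated as lost
    maxStalledCount: 0,           // a stalled job is failed rather than re-run
  },
);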
