AI can generate a thousand articles a minute. But it can't do your thinking for you. Hashnode is a community of builders, engineers, and tech leaders who blog to sharpen their ideas, share what they've learned, and grow alongside people who care about the craft.
Your blog is your reputation — start building it.
10h ago · 28 min read · Every payment system starts the same way: one table, one provider, ship it. Then the second provider arrives. Then retry logic. Then partial refunds. Then you realize the model you built on day one is ...
10h ago · 9 min read · Until now it was only possible to receive data from Binance via WebSocket. To send requests to the Binance API, for example to create or cancel orders, you always had to use the slower REST API. This ...
4h ago · 7 min read · In this series, I’m learning Docker and Kubernetes security by building a phishing URL scanner and applying security practices along the way. This is Part 1, where everything runs locally on Minikube, ...
1h ago · 3 min read · How We Reduced Customer Support Tickets by 40% with an AI Chatbot Built in Laravel. In January 2026, our customer support team was drowning. We had 1,200 support tickets per month for a SaaS product with 8,000 active users. The average first-response ...
10h ago · 4 min read · In my previous post, I explained why the JEPA architecture is such a promising lead for robotics. But between Yann LeCun’s theory and the first loss.backward(), there is a massive wall: the data.
Obsessed with crafting software. · 4 posts this month · #OpenSource #AI #Security #Python
Developer & Genealogist | Local-first architecture, Web Workers, and AI integration. · 1 post this month
1 post this monthMost are still shipping “AI add-ons.” The real shift happens when the whole workflow disappears into one action — that’s when users actually feel the value.
API docs get attention. The frontend/API contract usually doesn't. TypeScript helps, but types lie without runtime validation. The API returns an unexpected null, a renamed field, an edge case you never tested, and your types had no idea. Zod fixes this. Parse at the boundary. If the API changes shape, you catch it at the schema, not in a Sentry alert a week later. We do this with Next.js Server Actions too. The server/client boundary is the natural place to validate. Keep the schema next to the call. The documentation problem and the type-safety problem are usually the same problem.
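The "parse at the boundary" idea can be shown in a minimal, dependency-free sketch. Zod expresses the same thing declaratively (a `z.object(...)` schema with `.parse(data)`); the hand-rolled version below just makes the mechanics visible. The `User` shape is an illustrative assumption, not from the original post.

```typescript
// A minimal sketch of validating an API response at the boundary.
// With Zod this would be z.object({ id: z.number(), email: z.string() }).parse(data).
type User = { id: number; email: string };

function parseUser(data: unknown): User {
  if (typeof data !== "object" || data === null) {
    throw new Error("User: expected an object");
  }
  const record = data as Record<string, unknown>;
  if (typeof record.id !== "number") {
    // An unexpected null fails here, at the schema,
    // not in a Sentry alert a week later.
    throw new Error("User.id: expected a number");
  }
  if (typeof record.email !== "string") {
    // A renamed field (e.g. email -> emailAddress) also fails here.
    throw new Error("User.email: expected a string");
  }
  return { id: record.id, email: record.email };
}

// At the fetch boundary: parse before the data enters typed code.
const ok = parseUser(JSON.parse('{"id": 1, "email": "a@b.c"}'));
console.log(ok.email);
```

Everything downstream of `parseUser` can then trust the `User` type, because the shape was actually checked at runtime rather than merely asserted.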
You’re definitely not alone; that “Step 5 bottleneck” is where most AI-assisted teams hit reality. Right now, most teams aren’t fully automating reviews yet. The common pattern I’m seeing is a hybrid approach, not purely human or purely automated. What others are doing: AI generates code → Automated checks (linting, tests, security, architecture rules) → Targeted human review (not full manual review). 👉 The key shift: humans review intent + architecture, not every line.
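That hybrid gate can be sketched in a few lines. This is purely illustrative pseudologic, not any team's real pipeline: the names `runLint` and `isArchitecturalChange` stand in for whatever automated checks and routing rules a team actually uses.

```typescript
// Hypothetical sketch of the hybrid gate: automated checks run on every
// AI-generated file; only files touching architectural boundaries are
// routed to a human reviewer.
type CheckResult = { file: string; passed: boolean; needsHuman: boolean };

function gate(
  files: string[],
  runLint: (f: string) => boolean,              // automated: lint/tests/security
  isArchitecturalChange: (f: string) => boolean // routing rule: intent + architecture
): CheckResult[] {
  return files.map((file) => ({
    file,
    passed: runLint(file),
    needsHuman: isArchitecturalChange(file),
  }));
}

// Example: only the file crossing the payments module boundary is flagged.
const results = gate(
  ["src/utils/date.ts", "src/payments/provider.ts"],
  () => true,
  (f) => f.includes("/payments/"),
);
console.log(results.filter((r) => r.needsHuman).map((r) => r.file));
// → ["src/payments/provider.ts"]
```

The point of the structure is the routing: machines gate on mechanical correctness for every file, humans only see the subset where judgment matters.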
Nice first deployment walkthrough! One thing worth adding to this stack: set up an OAI (Origin Access Identity) or the newer OAC (Origin Access Control) so your S3 bucket stays fully private and only CloudFront can read from it. Without that, the bucket is publicly accessible even though CloudFront is in front. Also, consider adding a Cache-Control header strategy — setting immutable assets to max-age=31536000 with content hashing in filenames, and your index.html to no-cache so CloudFront always checks for the latest version. WAF is a solid move this early — most people skip it until they get hit with bot traffic.
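The Cache-Control strategy above can be made concrete with a small helper. This is a sketch under one assumption: build output uses content-hashed filenames like `app.3f9a1c.js` (the regex and the 1-hour fallback are illustrative choices, not part of the original comment).

```typescript
// Sketch of the Cache-Control strategy: hashed assets are immutable and safe
// to cache for a year; index.html must revalidate so deploys go live at once.
function cacheControlFor(key: string): string {
  if (key === "index.html") return "no-cache";
  // Assumes content hashing in filenames, e.g. "app.3f9a1c.js".
  const hashed = /\.[0-9a-f]{6,}\.(js|css|png|woff2)$/.test(key);
  return hashed ? "public, max-age=31536000, immutable" : "public, max-age=3600";
}

console.log(cacheControlFor("app.3f9a1c.js")); // → "public, max-age=31536000, immutable"
console.log(cacheControlFor("index.html"));    // → "no-cache"
```

A helper like this would typically feed the Cache-Control metadata set at upload time, so CloudFront serves hashed assets from cache indefinitely while always revalidating the HTML entry point.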
AI coding works best when you treat it like a collaborator, not a shortcut—be specific with prompts, provide context (code, errors, goals), and always review the output for logic and security. The real productivity boost comes from combining AI speed with human judgment.
You're right—many AI agent problems stem from improper data, lack of domain knowledge, or inadequate integration rather than the model itself. Issues like poor training data, insufficient fine-tuning, or misaligned objectives often lead to suboptimal results. Addressing these foundational elements usually resolves most challenges with AI agents.
Most developers go in expecting magic. They come out wondering why their app still breaks. I spent a full month using AI coding assistants as my main workflow tool. The speed on boilerplate code alone ...
The confidence problem runs deeper than it looks. AI is optimised for plausibility, not correctness. The code looks structured, compiles fin...
It all depends on how you use the AI to code. If you plan ahead for all possible vulnerabilities, the chances of breaking will be very low.