Sunday, March 22, 2026

Web API | HTTP Status Codes

 HTTP Status Code Families

1xx – Informational
  • Request received, processing continues
  • Rarely used in practice
  • Example: 100 Continue

2xx – Success
  • Request was successfully processed
  • Most common success responses
  • Examples:
    • 200 OK → Standard success
    • 201 Created → Resource created
    • 204 No Content → Success, no response body

3xx – Redirection
  • Further action needed to complete request
  • Client must follow redirect
  • Examples:
    • 301 Moved Permanently
    • 302 Found (Temporary Redirect)
    • 304 Not Modified → Cache use

4xx – Client Errors
  • Problem with the request (client-side issue)
  • Examples:
    • 400 Bad Request → Invalid input
    • 401 Unauthorized → Authentication required
    • 403 Forbidden → Access denied
    • 404 Not Found → Resource doesn’t exist

5xx – Server Errors
  • Server failed to fulfill a valid request
  • Examples:
    • 500 Internal Server Error → Generic failure
    • 502 Bad Gateway → Invalid upstream response
    • 503 Service Unavailable → Server overloaded/down
    • 504 Gateway Timeout → Upstream timeout
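The family rule above (first digit decides the class) can be captured in a tiny helper; `status_family` is an illustrative function, not part of any library:

```python
# Toy helper: map a 3-digit HTTP status code to its family name.
def status_family(code: int) -> str:
    families = {
        1: "Informational",
        2: "Success",
        3: "Redirection",
        4: "Client Error",
        5: "Server Error",
    }
    if code < 100 or code > 599:
        raise ValueError(f"not a valid HTTP status code: {code}")
    return families[code // 100]

print(status_family(204))  # Success
print(status_family(404))  # Client Error
```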

Saturday, March 21, 2026

AI | Foundation Model and LLM

 Foundation Model (FM) = broader term:
  • Text (LLM)
  • Image
  • Audio
  • Video

LLM: AI model trained on huge amounts of text data to read, write, and understand language like humans

An LLM is a deep learning model trained on large text datasets to understand and generate human-like language.
A foundation model that works with textual content is considered an LLM.

Break the term
  • Large → Trained on massive datasets (books, websites, code, etc.)
  • Language → Works with text (and sometimes speech)
  • Model → Mathematical system that predicts and generates words

How LLM works

An LLM works by:

  • Reading input text (your prompt)
  • Predicting the next most likely word
  • Repeating this to form sentences

Example:

Input: "The sky is"
LLM predicts → "blue"
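The predict-and-repeat loop can be sketched with a hard-coded bigram table standing in for the model's learned probabilities; a real LLM scores every token in a large vocabulary with a Transformer, but the loop shape is the same:

```python
# Toy next-word prediction. The table below is invented for illustration;
# a real LLM computes these continuations from learned weights.
NEXT_WORD = {
    ("the", "sky"): "is",
    ("sky", "is"): "blue",
}

def generate(prompt: str, steps: int = 2) -> str:
    words = prompt.lower().split()
    for _ in range(steps):
        key = tuple(words[-2:])        # look at the last two words
        if key not in NEXT_WORD:
            break                      # no known continuation
        words.append(NEXT_WORD[key])   # append the most likely next word
    return " ".join(words)

print(generate("The sky"))  # the sky is blue
```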

Behind the scenes, it uses a concept called
Transformer architecture
(which helps it understand context and relationships between words).


Key Features of LLMs

1. Text Generation ✍️
  • Write essays, emails, code
2. Understanding Language 📖
  • Answer questions
  • Summarize content
3. Context Awareness 🧠
  • Remembers previous messages in a conversation
4. Multi-task Capability 🔄
  • Translation
  • Coding
  • Chatting
  • Reasoning

Examples of LLMs

Some popular LLMs include:

  • GPT models
  • Claude models
  • Llama models
  • Amazon Titan

Where LLMs are used
  • Chatbots (like ChatGPT)
  • Virtual assistants
  • Content writing tools
  • Customer support automation
  • Code generation
  • Search engines


    Model Family         Is it LLM?   Notes
    Llama (Meta)         Yes          Pure LLM
    GPT (OpenAI)         Yes          Pure LLM
    Claude (Anthropic)   Yes          Pure LLM
    DeepSeek             Yes          Pure LLM
    Titan / Nova (AWS)   Partly       Mix of LLM + other models

    AI | Application Architecture with Bedrock

    Amazon Bedrock architecture consists of a client application that sends prompts through an API layer to the Bedrock service, which processes the request using selected foundation models, optionally enriches it using knowledge bases (RAG), applies guardrails, and returns the generated response back to the user.

    Step-by-Step Explanation

    1. User / Application Layer

    This is where interaction starts:

    • Web app (chatbot UI)
    • Mobile app
    • Backend service

    User sends:

    • Prompt (e.g., “Explain AI”)
    • Query (e.g., search in documents)

    2. Frontend / API Layer

    Handles request input:

    • UI collects input
    • Sends request via API

    Common AWS tools used:

    • API Gateway
    • SDK (Python, Java, JS)

    3. Application Layer (Backend Logic)

    This layer prepares the request before sending to Bedrock:

    Tasks:

    • Format prompt
    • Add system instructions
    • Manage session memory
    • Call Bedrock API

    Example:

    User: "Summarize this document"
    Backend adds:
    "You are a helpful assistant..."
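The backend's prompt-preparation step can be sketched as below; `build_messages` is a hypothetical helper using a generic chat-message shape, not the exact Bedrock request format:

```python
# Hypothetical sketch of the backend layer's job: wrap the raw user prompt
# with a system instruction and any prior session messages.
def build_messages(user_prompt, history=None,
                   system="You are a helpful assistant."):
    messages = [{"role": "system", "content": system}]
    messages += history or []          # session memory, if any
    messages.append({"role": "user", "content": user_prompt})
    return messages

msgs = build_messages("Summarize this document")
print(msgs[0]["content"])  # You are a helpful assistant.
```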

    4. Amazon Bedrock Core Layer 🧠

    This is the main engine.

    (A) Model Selection Layer

    You choose which model to use:

    • Claude
    • Titan
    • Llama
      etc.

    Same API → different models


    (B) Prompt Processing

    Bedrock processes:

    • Input prompt
    • Context (chat history)
    • Retrieved documents (if RAG)


    (C) Foundation Models (FMs)

    Actual AI models generate output:

    Types:

    • Text models (chat, code)
    • Image models
    • Embedding models

    (D) Agents (Optional but powerful)

    Agents can:

    • Call APIs
    • Query databases
    • Perform tasks

    Example:

    User: Book a ticket
    Agent → calls booking API → returns result
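The tool-calling idea can be sketched as a minimal dispatch table; `book_ticket` and the intent routing are invented for illustration (real Bedrock Agents use model-driven planning and action groups, not a fixed lookup):

```python
# Toy agent loop: map a detected intent to a tool function, invoke it,
# and return the result.
def book_ticket(args):
    return f"Ticket booked for {args['destination']}"

TOOLS = {"book_ticket": book_ticket}

def run_agent(intent: str, args: dict) -> str:
    tool = TOOLS.get(intent)
    if tool is None:
        return "No tool available for this request."
    return tool(args)

print(run_agent("book_ticket", {"destination": "Pune"}))
```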

    (E) Guardrails & Security

    Controls:

    • Content filtering
    • Data privacy
    • Access control

    Ensures:

    • Safe responses
    • No harmful output

    5. Knowledge Base (RAG Layer)

    If enabled, Bedrock uses:

    • Vector database
    • Embeddings

    Flow:

    User question → Search documents → Retrieve relevant info → Send to model

    Connected data sources:

    • S3 (PDFs, docs)
    • Databases
    • APIs
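The retrieval flow above can be sketched with keyword overlap standing in for embeddings; a real Knowledge Base uses an embedding model and a vector database, but the retrieve-then-augment shape is the same:

```python
# Toy RAG flow: "embed" by word overlap, retrieve the best-matching
# document, and prepend it to the prompt. Documents are invented examples.
DOCS = [
    "Refunds are processed within 5 business days.",
    "Support is available 24x7 via chat.",
]

def retrieve(question: str) -> str:
    q = set(question.lower().split())
    # pick the document sharing the most words with the question
    return max(DOCS, key=lambda d: len(q & set(d.lower().split())))

def augmented_prompt(question: str) -> str:
    return f"Context: {retrieve(question)}\nQuestion: {question}"

print(augmented_prompt("How long do refunds take?"))
```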

    6. Data Sources Layer

    Where actual data is stored:

    • Files (PDF, Word)
    • Company databases
    • External APIs

    7. Response Generation

    Model generates:

    • Text answer
    • Image
    • Structured output

    8. Response Back to User

    Final output goes back through:

    Bedrock → Backend → Frontend → User

    Full Flow
    User Input → Frontend UI → Backend (adds context) → Bedrock API → Model + Knowledge Base → Generated Response → User

    Key Components Summary

    Component        Purpose
    Frontend         User interaction
    Backend          Prompt processing
    Bedrock          AI engine
    Models           Generate output
    Knowledge Base   Long-term memory
    Agents           Automation
    Guardrails       Safety

    AI | Bedrock

    Amazon Bedrock is a fully managed AWS service that allows developers to build generative-AI applications using multiple foundation models with built-in memory, model customization, security, and agents through a single API.

    Bedrock is an AWS service used to build generative-AI apps (chatbots, assistants, search, RAG, image generation, etc.) without training models or managing GPUs. Below is a clear explanation focusing especially on main features, models, memory (context), and how it works.




    1) What Amazon Bedrock is (simple definition)

    According to the official AWS docs, Amazon Bedrock is a fully managed service that gives access to multiple foundation models (FMs) through a single API and lets you build AI apps securely on AWS.

    That means:

    • You don’t train the model yourself
    • You just choose a model → send a prompt → get a response
    • AWS manages scaling, GPUs, and infrastructure.

    2) Main Features of Amazon Bedrock 

    (A) Multiple foundation models in one place

    Instead of using only one AI model, Bedrock gives you many models from different companies in a single platform.

    Examples of supported providers:

    • Amazon (Titan / Nova models)
    • Anthropic (Claude models)
    • Meta (Llama models)
    • AI21 Labs
    • Cohere
    • Stability AI
    • Mistral, DeepSeek, etc.

    So if one model is not good for your use case, you can switch without changing your code much.


    (B) Single API for all models

    You can call different models using the same API.

    Example:

    • Today you use Claude
    • Tomorrow you switch to Titan or Llama
      → Your application code stays mostly the same.
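A minimal sketch of the single-API idea, using the request shape of Bedrock's Converse API. The model ID is only an example, and the boto3 call itself is left commented out because it needs AWS credentials; swapping models is then a one-line change:

```python
# Build a Bedrock Converse-style request. Switching models means changing
# only the modelId string; the rest of the code stays the same.
def converse_request(model_id: str, prompt: str) -> dict:
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
    }

req = converse_request("anthropic.claude-3-haiku-20240307-v1:0", "Explain AI")

# With AWS credentials configured, the call would look like:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.converse(**req)

print(req["modelId"])
```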

    (C) Model customization (very important feature)

    You can improve a model using your own data:

    1. Fine-tuning
      • Train a copy of the model using your dataset
    2. RAG (Retrieval Augmented Generation)
      • Instead of training, you connect documents (PDFs, websites, database)
    3. Knowledge Bases
      • Bedrock automatically stores documents, creates embeddings, and retrieves them when users ask questions.

    This is used for:

    • Chat with company documents
    • AI customer support
    • Enterprise search
    • Internal knowledge bots

    (D) Agents (AI that can perform tasks)

    Bedrock supports Agents, meaning AI can:

    • Read data
    • Call APIs
    • Perform actions (like booking, searching database, sending emails)

    AWS even provides:

    • Memory
    • Tool usage
    • Authentication
    • Monitoring for AI agents.

    (E) Built-in security and privacy 

    AWS clearly states:

    • Your data is not used to train the original models
    • Data stays inside your AWS account
    • Works with VPC (private network).


    3) Models in Amazon Bedrock

    Bedrock supports 100+ foundation models in a single catalog.

    Main types of models available:

    (1) Text / Chat models

    Examples:

    • Claude (Anthropic)
    • Amazon Titan Text
    • Llama models
    • AI21 Jamba
    • Cohere Command

    Used for:

    • Chatbots
    • Summarization
    • Code generation
    • Content writing

    (2) Image generation models

    Examples:

    • Titan Image Generator
    • Stability AI models

    Used for:

    • AI images
    • Marketing creatives
    • Design automation

    (3) Embedding models

    These convert text into vectors (numbers).

    Used for:

    • Semantic search
    • RAG
    • Recommendation systems
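A toy illustration of how embedding vectors power semantic search; the three-number vectors are hand-made stand-ins for real embedding output (which has hundreds of dimensions), but the cosine-similarity ranking is the same idea:

```python
import math

# Cosine similarity between two vectors: 1.0 means same direction.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hand-made "embeddings" for illustration only.
vectors = {"cat": [1.0, 0.9, 0.1], "dog": [0.9, 1.0, 0.2], "car": [0.1, 0.2, 1.0]}
query = [1.0, 0.8, 0.1]  # pretend this vector embeds the word "kitten"

best = max(vectors, key=lambda w: cosine(query, vectors[w]))
print(best)  # cat
```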

    4) Memory in Amazon Bedrock

    There are 3 types of memory used in Bedrock:

    (A) Context window (model memory)

    This is how much text the model can remember in one prompt.

    Example:

    • Some Claude models support very long context (like 200k tokens) → useful for large PDFs.

    So if you give:

    • Long document
    • Research paper
    • Entire chat history
      → The model can still understand it.

    (B) Session memory (conversation memory)

    Bedrock supports multi-turn conversations, meaning the model remembers previous messages in the same chat session.

    Example:
    User: Explain Python
    User: Now give example
    User: Now optimize the code

    The model remembers the previous messages.
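Session memory still has to fit inside the model's context window; a naive sketch that trims the oldest turns first (word counts stand in for real model-specific tokenization):

```python
# Keep a conversation within a token budget by dropping the oldest
# messages first. Counting words instead of tokens is a simplification.
def trim_history(messages, max_tokens=8):
    kept, total = [], 0
    for msg in reversed(messages):        # walk newest-first
        cost = len(msg.split())
        if total + cost > max_tokens:
            break                         # budget exhausted: drop the rest
        kept.append(msg)
        total += cost
    return list(reversed(kept))           # restore chronological order

history = ["Explain Python", "Now give example", "Now optimize the code"]
print(trim_history(history, max_tokens=7))
```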


    (C) Knowledge-base memory (long-term memory)

    This is used when you connect:

    • PDFs
    • Documents
    • Database
    • Company knowledge

    Bedrock stores the data in a vector database and retrieves it when needed.

    This works like:

    • Long-term memory for AI apps

    5) Why companies use Amazon Bedrock

    Because it provides:

    • No GPU setup
    • Multiple AI models in one place
    • Easy scaling
    • Secure enterprise environment
    • Supports chatbots + RAG + agents + image AI together

    Friday, March 13, 2026

    AWS | Use Cases : Lambda - Lambda(Container) - Containers(Fargate/EC2) - EC2

     1. Use case for choosing Lambda over container-based deployment

    Choose AWS Lambda instead of containers when you want event-driven, serverless execution without managing infrastructure.

    Typical use cases

    1. Event-driven processing
    • Triggered by events from:

      • Amazon S3 uploads
      • Amazon DynamoDB streams
      • Amazon EventBridge
            Example: Resize images when uploaded to S3.
    2. Short-lived microservices

    • APIs running behind Amazon API Gateway
    • Small functions like validation, authentication, etc.

    3. Sporadic workloads

    • Jobs that run occasionally
    • No need to pay for idle infrastructure.

    4. Automatic scaling

    • Traffic spikes → Lambda scales automatically.

    Why Lambda here
    • No server management
    • Pay per execution
    • Built-in scaling
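The S3-upload case above can be sketched as a minimal event-driven handler; the event dict follows the S3 notification structure, and the resize step is only a placeholder:

```python
# Minimal Lambda handler sketch for S3 upload events.
def handler(event, context):
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # real code would download the object and resize it here
        processed.append(f"s3://{bucket}/{key}")
    return {"processed": processed}

# Invoke locally with a hand-built event for illustration.
fake_event = {"Records": [{"s3": {"bucket": {"name": "photos"},
                                  "object": {"key": "cat.jpg"}}}]}
print(handler(fake_event, None))
```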


    2. Use case for choosing containers over Lambda

    Choose containers (like Amazon ECS, Amazon EKS, or Docker deployments) when workloads require more control and longer execution time.

    Typical use cases

    1. Long-running services
    • Backend APIs
    • Web applications
    • Streaming services

    Lambda has execution limits (a 15-minute maximum per invocation), while containers can run indefinitely.


    2. Custom runtime or dependencies

            If you need:
      • special OS libraries
      • GPU support
      • custom runtime environments

    Containers allow full environment control.


    3. Stateful or complex applications
    Examples:

    • Machine learning inference services
    • Video processing pipelines
    • background workers


    4. Consistent dev → prod environment
    Docker containers ensure the same environment everywhere.


    3. Use case for choosing EC2 over containers or Lambda

    Choose Amazon EC2 when you need full control of the infrastructure.

    Typical use cases

    1. Legacy applications

    Applications that:

    • cannot be containerized
    • require specific OS setups.


    2. Custom networking or OS configuration

    You need:
    • kernel modifications
    • custom drivers
    • advanced networking.


    3. Specialized hardware
    Examples:
    • GPU workloads
    • FPGA workloads
    • HPC computing.

    4. Stateful workloads
    Examples:

    • large databases
    • heavy caching systems

    4. Use case for choosing Fargate over ECS EC2

    Choose AWS Fargate instead of Amazon ECS with EC2 when you want containers without managing servers.

    When Fargate is better

    1. No infrastructure management

    You don’t need to:

    • patch servers
    • scale EC2
    • manage clusters.


    2. Simple microservices
    Perfect for:

    • containerized APIs
    • background jobs
    • microservices architecture.


    3. Variable workloads
    Fargate automatically scales tasks.


    When ECS EC2 is better

    Use ECS EC2 when:
    • you want lower cost at scale
    • you need GPU or specialized hardware
    • you want custom instance types


    5. Use case for deploying Lambda using containers

    AWS Lambda supports container images up to 10 GB.
    ZIP-based Lambda packages are limited to 250 MB (unzipped).

    Use container-based Lambda when

    1. Large dependencies

    If your Lambda package exceeds normal limits.

    Example:

    • ML models
    • heavy Python libraries.

    2. Custom runtime
    You want:

    • custom Linux packages
    • special frameworks.


    3. Standardized CI/CD
    If your organization already uses:

    • Docker
    • container pipelines.


    4. Portability
    You can reuse the same container for:
    • Lambda
    • ECS
    • Kubernetes.
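A container-image Lambda is typically built from an AWS-provided base image; a minimal sketch (the image tag and file names are examples):

```dockerfile
# AWS-provided Python base image for Lambda (example tag)
FROM public.ecr.aws/lambda/python:3.12

# Copy the function code into the Lambda task root
COPY app.py ${LAMBDA_TASK_ROOT}

# Handler: module "app", function "handler"
CMD ["app.handler"]
```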


    Simple Decision Summary

    Scenario                               Best Option
    Event-driven small tasks               Lambda
    Long-running microservices             Containers
    Full infrastructure control            EC2
    Containers without server management   Fargate
    Large Lambda dependencies              Lambda container image

    Simple rule many architects use:
    • Lambda → event-driven
    • Fargate → container microservices
    • EC2 → full control workloads

    AWS - Network Access Control List (ACL)

    A Network Access Control List (NACL) is an optional layer of security for your VPC that acts as a firewall for one or more subnets, controlling inbound and outbound traffic. A default NACL is created with the VPC; you can configure NACLs as an additional layer of security.


    An ephemeral port is a short-lived endpoint that is created by the operating system when a program requests any available user port. The operating system selects the port number from a predefined range, typically between 1024 and 65535, and releases the port after the related TCP connection terminates.
    Because NACLs are stateless, you typically allow the ephemeral port range in the ACL's outbound rule list so that return traffic for inbound connections is not blocked.



    Facts About NACL
    1. NACLs are evaluated before security groups; only traffic that passes the NACL reaches the security groups.
    2. NACLs are stateless: allowed inbound traffic does not automatically get a matching outbound rule, so you must create the outbound rule yourself.
    3. A default Network ACL is created with the VPC; it allows all inbound and outbound traffic, and you can customize it as per your requirements.
    4. A custom Network ACL denies all inbound and outbound traffic by default, until you add rules to it.
    5. A Network ACL can be associated with multiple subnets, but a subnet can be associated with only one NACL. If you do not explicitly associate a subnet with an NACL, it is automatically associated with the default ACL.
    6. There are separate lists of inbound and outbound rules.
    7. Inbound/outbound lists are numbered rule lists, evaluated in ascending rule-number order; the first matching rule applies.
      Ex.
      Rule No. 100 allowing HTTP on port 80
      Rule No. 200 denying HTTP on port 80
      Result: the ACL allows HTTP on port 80, because rule 100 matches first.
    8. You can block specific IPs with an NACL but not with security groups.
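The rule-ordering behaviour can be simulated in a few lines; the rule tuples below are invented for illustration, with the implicit "*" rule denying anything left unmatched:

```python
# Simulate NACL evaluation: rules are checked in ascending rule-number
# order, and the first matching rule decides.
RULES = [
    (100, "tcp", 80, "ALLOW"),
    (200, "tcp", 80, "DENY"),
]

def evaluate(protocol: str, port: int) -> str:
    for _, proto, p, action in sorted(RULES):  # lowest rule number first
        if proto == protocol and p == port:
            return action
    return "DENY"  # implicit "*" rule: deny anything unmatched

print(evaluate("tcp", 80))   # ALLOW: rule 100 matches before rule 200
print(evaluate("tcp", 443))  # DENY: no rule matches
```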

    AWS - Pillars of Well Architected Framework

    The pillars of well architected framework

    1. Operational Excellence: The ability to run and maintain systems, support business requirements and technology changes, and continually improve supporting processes and procedures.

    • Monitoring system health
      • Business metrics
      • Customer experience metrics
      • System metrics
      • Operational metrics
    It encompasses areas such as automation, change management, and continuous improvement.
    AWS Services: AWS CloudFormation, AWS CloudTrail, AWS Config, AWS Systems Manager, AWS CodePipeline, AWS CodeDeploy, AWS CodeBuild, AWS CloudWatch, AWS Lambda, AWS Step Functions.

    2. Security:

    It covers areas like identity and access management, data protection, and incident response.
    AWS Services: AWS IAM, AWS KMS, AWS CloudTrail, AWS WAF, AWS GuardDuty, AWS Secrets Manager, AWS Certificate Manager, AWS Security Hub, AWS Config, AWS Network Firewall, AWS Shield.

    3. Reliability: (Hint: HA + DR + Backups)

    It covers areas like fault tolerance, high availability, and backup and recovery strategies.
    AWS Services: AWS Auto Scaling, AWS Elastic Load Balancing, AWS Route 53, AWS CloudWatch, AWS Backup, AWS CloudFormation, AWS CloudTrail, AWS Lambda, AWS Step Functions, AWS SNS, AWS SQS.

    4. Performance Efficiency:

    It covers areas like selection of compute resources, caching, and monitoring.
    AWS Services: AWS EC2, AWS Auto Scaling, AWS ElastiCache, AWS CloudWatch, AWS CloudFront, AWS Global Accelerator, AWS PrivateLink, AWS Transit Gateway, AWS Lambda, AWS EFS, AWS FSx.

    5. Cost optimization:

    It covers areas such as cost-effective resource selection, monitoring expenditure, and identifying opportunities for cost savings.
    AWS Services: AWS EC2 Spot Instances, AWS Auto Scaling, AWS Cost Explorer, AWS Trusted Advisor, AWS Budgets, AWS Reserved Instances, AWS Savings Plans, AWS Organizations, AWS Cost and Usage Reports, AWS Lambda, AWS CloudWatch.

    6. Sustainability Pillar:

    It covers areas like carbon footprint reduction, energy efficiency, and resource optimization.
    AWS Services: AWS Graviton instances, Amazon EBS gp3 volumes, AWS Instance Scheduler, AWS Cost Explorer, AWS Trusted Advisor, AWS Compute Optimizer, AWS Ground Station, AWS Snowcone, AWS Lambda, AWS Batch, AWS Glue, AWS Athena.
    Example: a scheduled weekend shutdown of EC2 instances that are not in use.

    Key word to remember: OSCAR (Replace A by P)



    Good articles you should read: 
    https://dzone.com/articles/pillars-of-aws-well-architected-framework
    https://builder.aws.com/content/2eIXjpD5TI2j00UWUHx159mO9Mw/aws-well-architected-framework-comprehensive-guide
    https://www.aws.ps/aws-well-architected-framework/ 

    Tuesday, March 10, 2026

    Harness | Feature Flag

    1. A feature flag (also called a feature toggle) is a conditional switch in code.

    Instead of deploying new code to enable a feature, you wrap the feature with a flag:

    if (featureFlagService.isEnabled("new-checkout")) {
        showNewCheckout();
    } else {
        showOldCheckout();
    }

    The flag value is controlled remotely from Harness.



    2. Basic Flow

    Typical lifecycle:

    1. Create flag in Harness
    2. Add SDK to your application
    3. Evaluate the flag in code
    4. Control rollout from the dashboard
    5. Target users or environments

    Harness Dashboard → Feature Flag Service → Application SDK → Feature Enabled / Disabled

    3. Step-by-Step Implementation

    Step 1: Create a Feature Flag

    In Harness:

    1. Go to Feature Flags module
    2. Click Create Feature Flag
    3. Choose flag type:
      • Boolean
      • String
      • Number
    4. Name it:
      new-checkout-ui

    Enable it for specific environments like:

    • dev
    • staging
    • production

    Step 2: Add Harness SDK

    Install the SDK in your app.

    Example: Java

    <dependency>
        <groupId>io.harness</groupId>
        <artifactId>ff-java-server-sdk</artifactId>
    </dependency>

    Initialize SDK:

    CfClient client = new CfClient("YOUR_API_KEY");

    Step 3: Evaluate the Flag in Code

    Create a user context:

    Target target = Target.builder()
        .identifier("user123")
        .name("Test User")
        .build();

    Check the flag:

    boolean enabled = client.boolVariation(
        "new-checkout-ui",
        target,
        false
    );

    if (enabled) {
        showNewCheckout();
    }

    4. Targeting Specific Users

    Harness allows rule-based targeting.

    Examples:

    Enable feature only for:

    • specific users
    • user segments
    • percentage rollout
    • environments

    Example targeting rule:

    Email ends with @company.com → Feature ON
    Others → OFF

    5. Progressive Rollout (Most Common Use)

    Instead of enabling for everyone:

    Phase     Users
    Phase 1   Internal team
    Phase 2   5% users
    Phase 3   25% users
    Phase 4   100%

    This reduces risk if something breaks.
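Percentage rollout can be sketched as deterministic hashing; Harness evaluates targeting rules server-side, so this only shows the bucketing idea (same user always lands in the same bucket, so the flag doesn't flip between requests):

```python
import hashlib

# Hash each user id into a stable bucket 0-99 and enable the flag when
# the bucket falls below the rollout percentage.
def in_rollout(user_id: str, percent: int) -> bool:
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

print(in_rollout("user123", 100))  # True: 100% rollout includes everyone
print(in_rollout("user123", 0))    # False: 0% rollout includes no one
```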


    6. Kill Switch (Critical Use Case)

    If production breaks:

    Just disable the flag in Harness UI.

    No redeploy required.

    Feature Flag → OFF
    Application automatically disables feature

    7. Environment Control

    You can manage flags per environment:

    Environment   Status
    Dev           ON
    QA            ON
    Prod          OFF

    This helps test features safely before release.


    8. Best Practices

    1. Use meaningful names

    Good:

    checkout_new_ui

    Bad:

    flag123

    2. Remove flags after rollout

    Once feature is stable:

    • delete flag
    • remove conditional code

    Otherwise tech debt increases.


    3. Use flags for risky features

    Great for:

    • new UI
    • payment systems
    • new algorithms
    • large backend changes


    9. Real Production Example

    Example:

    You deploy new recommendation algorithm.

    Steps:

    1. Deploy code with feature flag
    2. Enable for 5% users
    3. Monitor metrics
    4. Increase to 50%
    5. Finally 100%

    If metrics drop → disable instantly.


    In short:
    Harness feature flags allow safe releases, gradual rollouts, instant rollback, and user targeting without redeploying applications.

    Saturday, March 7, 2026

    Harness | You Should Know These Concepts

    Harness is a modern CI/CD and DevOps platform that automates software delivery using AI/ML-driven deployment verification, continuous integration, continuous delivery, and cloud cost management.

    Key features
    1. Continuous Integration (CI)
    2. Continuous Delivery (CD)
    3. Feature Flags
    4. Cloud Cost Management
    5. Security Testing Orchestration
    6. GitOps support
    7. Automated Rollback using ML verification

    Example:

    Harness is a software delivery platform that automates CI/CD pipelines and uses machine learning for automated verification and rollback of deployments. It supports multiple deployment strategies such as canary, blue-green, and rolling deployments.


    Core modules in Harness -

    Important Harness modules include:

    Module                  Purpose
    CI                      Build and test automation
    CD                      Deployment automation
    Feature Flags           Release features gradually
    Cloud Cost Management   Optimize cloud spending
    STO                     Security testing orchestration
    GitOps                  Kubernetes Git-based deployment

    Harness Pipeline -

    A pipeline in Harness is a series of automated steps used to build, test, and deploy applications.

    Components
    • Stages
    • Steps
    • Triggers
    • Approvals
    • Rollback steps

    Example flow:

    Code Commit → Build → Test → Artifact → Deploy → Verify → Rollback if needed

    Stage in Harness -

    A Stage is a logical grouping of steps within a pipeline that represents a phase of the delivery process.

    Common stages
    • Build Stage
    • Deployment Stage
    • Approval Stage
    • Security Scan Stage

    Example:

    Pipeline
    ├── Build Stage
    ├── Test Stage
    └── Deploy Stage

    Deployment strategies Harness supports -

    Harness supports several deployment strategies:

    Strategy     Explanation
    Rolling      Deploy gradually to instances
    Blue-Green   Switch traffic between environments
    Canary       Deploy to small subset first
    Recreate     Stop old version then deploy new
    Shadow       Mirror traffic to new version

    Example:

    In a Canary deployment, a small percentage of traffic is routed to the new version to validate its performance before a full rollout.


    Harness Continuous Verification (CV) -

    Continuous Verification uses machine learning to analyze application metrics and logs during deployment to detect anomalies.

    It integrates with tools like:
    • Prometheus
    • Datadog
    • Splunk
    • New Relic

    If anomalies are detected:

    • Harness automatically rolls back the deployment.


    Harness Delegate -

    A Delegate is a lightweight service installed in your infrastructure that performs tasks on behalf of Harness.

    In simple words, the Delegate is an agent of Harness that performs all the steps required for deployment on your infrastructure.

    Harness itself does not directly access your infrastructure; instead, the delegate performs those operations locally and communicates results back to Harness.



    Responsibilities
    • Execute pipeline steps
    • Communicate with infrastructure
    • Connect with artifact repositories
    • Perform deployments

    Example:
    A Delegate in Harness is a lightweight agent installed in the target infrastructure that executes pipeline tasks such as deployments, integrations, and scripts execution. It acts as a secure communication bridge between the Harness platform and the infrastructure resources.


    Connectors in Harness -

    Connectors allow Harness to connect with external systems.

    Examples
    • Git repositories (like stash)
    • Artifact registries (like nexus)
    • Cloud providers (like GCP, Azure or AWS)
    • Kubernetes clusters

    Common integrations:

    • GitHub
    • Docker Hub
    • Amazon Web Services
    • Kubernetes


    Infrastructure Definition in Harness -

    Infrastructure Definition defines where the application will be deployed.

    Examples
    • Kubernetes cluster
    • AWS EC2
    • Azure VM
    • Google Cloud

    It includes:

    • Cluster details
    • Namespace
    • Deployment environment


    GitOps in Harness -

    GitOps is a deployment method where Git repositories act as the single source of truth for infrastructure and application configuration.

    Harness GitOps integrates with:

    • Argo CD
    • Flux

    Workflow:

    Developer commit → Git repo → GitOps tool → Kubernetes cluster

    Alternate:
    You can create a trigger under the pipeline's Triggers section using a GitHub webhook, which will start your pipeline execution on code commits.


    Rollback in Harness -

    Rollback happens automatically when:

    1. Deployment verification fails
    2. Error thresholds are crossed
    3. Manual rollback is triggered

    Steps:

    1. Harness detects failure
    2. Stops current deployment
    3. Restores previous stable version


    Harness Templates -

    Templates allow reusable pipeline components.

    Example reusable templates:

    • Deployment step
    • Security scan
    • Build process

    Benefits:

    • Standardization
    • Reusability
    • Reduced configuration errors


    Harness Triggers -

    Triggers automatically start pipelines based on events.

    Common triggers:

    • Git commit
    • Pull request
    • Schedule
    • Webhook

    Example:

    Git Push → Trigger Pipeline → Build + Deploy

    Difference between Harness and Jenkins -

    Feature                 Harness          Jenkins
    Setup                   SaaS / Managed   Self-hosted
    AI verification         Yes              No
    UI                      Modern UI        Basic
    Deployment strategies   Built-in         Plugin based
    Rollback                Automated        Manual scripting

    Secrets in Harness -

    Secrets store sensitive information securely in encrypted form and decrypt it at the time of use.
    Alternatively, you can integrate an external store such as AWS Secrets Manager or HashiCorp Vault.

    Examples:

    • API keys
    • Passwords
    • Tokens

    Harness integrates with secret managers like:

    • HashiCorp Vault
    • AWS Secrets Manager


    Harness supports Kubernetes deployments -

    Harness supports multiple Kubernetes deployment types:

    • Kubernetes Rolling Deployment
    • Kubernetes Canary Deployment
    • Helm Chart Deployment
    • GitOps deployment

    Tools used:

    • Helm
    • Kubectl


    Advantages of using Harness -

    Advantages include:

    • AI-driven deployment verification
    • Automatic rollback
    • Reduced deployment failures
    • Built-in security scanning
    • Native cloud and Kubernetes support
