Blog

  • Progressive Jackpots: How to Win Big

    Why Progressive Jackpots Matter

    Progressive jackpots represent the pinnacle of excitement in online casinos. Unlike static jackpots, which remain fixed, progressive jackpots grow with each player’s contribution. This can lead to life-changing sums, often reaching millions. For players at platforms like CryptoLeo Casino, understanding the mechanics behind these jackpots is crucial for maximizing their chances of winning big.

    The Mechanics of Progressive Jackpots

    Progressive jackpots are funded by a small percentage of each bet placed on a corresponding game. This means that as more players participate, the jackpot continues to climb. Here’s how the mechanics typically work:

    • Every bet contributes a small fraction (usually between 1% and 5%) to the jackpot pool.
    • Jackpots can be local (limited to a single casino) or pooled across multiple casinos.
    • To qualify for the jackpot, players may need to place a maximum bet.

    Understanding these elements can significantly influence your strategy.
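The contribution mechanics above can be sketched in a few lines of Python; the seed amount and the 2% rate below are hypothetical examples chosen from the 1%–5% range mentioned above, not figures from any real casino:

```python
# Illustrative sketch of how a progressive jackpot grows from bet contributions.
# The seed amount and 2% contribution rate are hypothetical examples.

def grow_jackpot(seed, bets, contribution_rate=0.02):
    """Return the jackpot total after adding a fixed fraction of each bet."""
    jackpot = seed
    for bet in bets:
        jackpot += bet * contribution_rate
    return jackpot

# One million players each betting $1 grows the pot by $20,000.
pot = grow_jackpot(seed=100_000.0, bets=[1.0] * 1_000_000)
print(round(pot, 2))  # 120000.0
```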

    The Math Behind Winning Progressive Jackpots

    The mathematics of winning a progressive jackpot can be complex. Here are some essential metrics:

    Game Title RTP (%) Minimum Bet to Qualify Current Jackpot
    Mega Moolah 88.12% $0.25 $15,000,000+
    Divine Fortune 96.59% $0.30 $400,000+
    Major Millions 89.30% $0.50 $1,000,000+

    A higher RTP (Return to Player) percentage generally indicates better long-term returns. A lower minimum qualifying bet, meanwhile, stretches the same budget across more qualifying spins, giving more chances at the pot.

    Strategies to Increase Your Chances

    Winning a progressive jackpot often relies on both luck and strategy. Here are some effective approaches:

    • Bet Max: Always wager the maximum amount to qualify for the jackpot.
    • Choose Wisely: Select games with a higher RTP and larger jackpots.
    • Set a Budget: Determine your spending limit to avoid chasing losses.
    • Utilize Bonuses: Take advantage of casino promotions and bonuses to extend your playtime.

    Implementing these strategies can significantly enhance your gaming experience.

    Understanding the Odds: What You Need to Know

    The odds of hitting a progressive jackpot can be daunting. For instance, the probability of winning Mega Moolah’s jackpot may be as low as 1 in 50 million, depending on your bet and timing. It’s essential to grasp these odds:

    • Progressive jackpots often have lower win rates compared to standard slots.
    • Consider the volatility of the game; high volatility games can lead to substantial wins but may take longer to hit.

    Being aware of these odds can help you manage your expectations and play more strategically.
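Those odds translate into simple expected-value arithmetic. Here is a rough sketch using the figures cited above (a 1-in-50-million chance at a $15M pot); the function name and exact numbers are illustrative:

```python
# Rough expected-value arithmetic for a jackpot hit, using the figures cited
# above (1-in-50-million odds, a $15M pot). Real odds vary by game and bet.

def jackpot_ev_per_spin(jackpot, odds):
    """Expected jackpot winnings contributed by a single spin."""
    return jackpot / odds

ev = jackpot_ev_per_spin(15_000_000, 50_000_000)
print(f"${ev:.2f} per spin")  # $0.30 per spin
```

The point of the arithmetic: even a huge pot adds only cents of expected value to each spin, which is why bankroll management matters more than chasing the headline number.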

    Hidden Risks of Progressive Jackpots

    While the allure of massive payouts is enticing, there are hidden risks that players should be aware of:

    • High Variance: Many progressive games are high variance, meaning wins can be infrequent.
    • Budgeting Challenges: The thrill of chasing big wins can lead to overspending.
    • Jackpot Saturation: Many players may be aiming for the same jackpot, increasing competition.

    Understanding these risks is vital for responsible gaming.

    Final Thoughts: The Journey of Winning Big

    Winning a progressive jackpot is as much about strategy as it is about luck. By understanding the mechanics, employing effective strategies, and being aware of the risks, you can enhance your chances of hitting it big. As you explore options at online casinos like CryptoLeo, keep in mind that the journey can be just as rewarding as the potential payout.

  • Cryptocurrency Gambling: Future of Online Casinos

    Cryptocurrency gambling is rapidly transforming the landscape of online casinos, offering unmatched benefits such as enhanced privacy, faster transactions, and lower fees. As digital currencies like Bitcoin, Ethereum, and Litecoin become mainstream, their integration into online gambling platforms signals a new era of innovation and user empowerment. For players and operators alike, understanding this shift is crucial to staying ahead in the competitive gaming industry.

    To explore this evolution comprehensively, we will delve into the benefits, challenges, practical steps for engaging with crypto gambling, and how platforms like 1red Casino Online are leading the way in this digital revolution.

    Why Cryptocurrency Gambling Matters

    The integration of cryptocurrencies into online gambling platforms is not just a trend but a fundamental shift in how players interact with casinos. With over 200 million cryptocurrency users worldwide, the potential reach of crypto gambling is vast. This expansion is driven by increased demand for privacy, transparency, and quicker transaction times.

    Cryptocurrency gambling platforms offer an average 96.5% RTP (Return to Player), matching or exceeding traditional online casinos. Additionally, the decentralized nature reduces the need for intermediaries, leading to lower operational costs and better payout rates for players.

    Platforms like 1red Casino Online are pioneering this space, providing seamless integration of crypto payments and innovative gaming options that appeal to a tech-savvy audience.

    Benefits Over Traditional Online Casinos

    Feature Cryptocurrency Casinos Traditional Online Casinos
    Transaction Speed Typically less than 24 hours Usually 1-5 days depending on banking method
    Fees Lower, often 0-2% Higher, with bank charges and processing fees
    Privacy Enhanced, with minimal personal data required Requires extensive personal and banking details
    Accessibility Global, no banking restrictions Restricted by regional banking laws
    Security High, utilizing blockchain technology Variable, depending on platform security measures

    How to Start Crypto Gambling

    1. Choose a reputable crypto casino platform, such as 1red Casino Online.
    2. Create a digital wallet compatible with popular cryptocurrencies like Bitcoin or Ethereum.
    3. Deposit funds into your wallet from an exchange or direct bank transfer.
    4. Transfer your crypto to the casino’s wallet address securely.
    5. Browse available games, check RTP rates (commonly around 96%), and set your betting limits.
    6. Play responsibly, leveraging features like deposit limits and self-exclusion options.

    By following these steps, new users can begin engaging with crypto gambling efficiently and securely, enjoying the benefits of fast transactions and transparent gameplay.

    • Bitcoin (BTC): The most widely accepted and recognized cryptocurrency, known for its stability and security.
    • Ethereum (ETH): Popular for its smart contract capabilities, enabling innovative gaming features.
    • Litecoin (LTC): Offers faster transaction times with lower fees, ideal for quick play.
    • Ripple (XRP): Known for near-instant transactions, gaining traction among high-frequency players.
    • Dogecoin (DOGE): Popular in casual gaming communities due to its meme culture and low fees.

    Challenges and Risks

    Despite the numerous advantages, crypto gambling faces several hurdles. Price volatility can impact bankroll stability, as cryptocurrencies like Bitcoin can fluctuate by over 10% within 24 hours. Regulatory uncertainty remains a significant concern, with some jurisdictions banning or restricting crypto gambling activities.
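That volatility maps directly onto a player's bankroll, independent of any wins or losses at the tables. A minimal sketch, with hypothetical balance and price figures:

```python
# How a price swing changes a crypto bankroll's fiat value, independent of
# any gambling results. The balance and price figures are hypothetical.

def bankroll_after_swing(coins, price, pct_change):
    """Fiat value of a crypto balance after the price moves by pct_change."""
    return coins * price * (1 + pct_change)

# 0.5 BTC at $60,000, then a -10% move: $30,000 becomes $27,000.
value = bankroll_after_swing(0.5, 60_000, -0.10)
print(round(value, 2))  # 27000.0
```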

    Security breaches or hacking incidents can threaten funds if proper security measures are not implemented. Additionally, the lack of consumer protections compared to traditional banking can make users vulnerable to scams or fraud.

    Operators must navigate complex legal landscapes, and players should conduct due diligence before depositing funds into any platform.

    Future Outlook

    Experts forecast that by 2028, over 50% of online gambling transactions will be conducted using cryptocurrencies. The adoption of blockchain-based provably fair games will rise, offering transparency and trust.

    Decentralized casinos powered by smart contracts could eliminate the need for middlemen, lowering costs and increasing payout percentages. Integration with emerging technologies like virtual reality (VR) and augmented reality (AR) will enhance immersive gaming experiences.

    Regulatory frameworks will likely become more defined, providing clearer pathways for operators and safeguarding players’ interests.

    Case Study: Cryptocurrency Casino Success

    A notable example is CryptoWin Casino, which launched in 2020 and quickly gained a user base of over 100,000 players worldwide. Its RTP averages 97%, and transaction times average less than 1 hour.

    By adopting a blockchain-based system, CryptoWin reduced operational costs by 30% and increased payout speed. Their revenue soared by 150% within the first year, demonstrating the lucrative potential of crypto gambling platforms.

    Myths vs Facts

    Myth Fact
    Cryptocurrency gambling is illegal everywhere. False; legality varies by country, but many jurisdictions regulate or permit crypto gambling.
    Crypto gambling is only for tech-savvy players. False; user-friendly interfaces make it accessible for all skill levels.
    Cryptos are too volatile to use for gambling. Partially true; but many casinos offer stablecoins like USDT for price stability.
    Crypto gambling platforms are less secure. False; blockchain technology provides high security when proper measures are employed.

    Practical Steps to Join Crypto Casinos

    • Research reputable platforms with positive user reviews and proper licensing.
    • Set up a secure digital wallet with strong authentication.
    • Deposit funds through trusted exchanges, ensuring the platform accepts your chosen crypto.
    • Check for features like provably fair games and responsible gambling tools.
    • Begin with small bets to familiarize yourself with platform mechanics and withdrawal procedures.

    Always prioritize security and responsible gambling practices when engaging in crypto betting activities.

    Next Steps and Resources

    For those interested in exploring further, consider visiting 1red Casino Online to experience a leading crypto-friendly casino platform. Stay informed about new developments by following industry news, joining online forums, and participating in blockchain and gaming webinars.

    As the industry evolves, continuous learning and cautious participation will be key to harnessing the full potential of cryptocurrency gambling.

  • Why Transaction Simulation Is the Unsung Superpower of Multi‑Chain Wallets

    Whoa, this changes things. I tried a cross-chain swap yesterday and something felt off. My instinct said double-check the simulation before hitting send. Initially I thought the wallet’s routing logic would simply pick the cheapest path, but then I noticed gas estimation quirks across EVM networks that made the route suboptimal and that realization changed how I test swaps. So I started simulating every step, slow and methodical.

    Seriously? Not always obvious. Transaction simulation is the unsung hero of safe DeFi UX. It catches slippage, failing calls, and weird reverts before money moves. Initially I thought a simple RPC call would suffice to mirror a block’s behavior, but then I ran into calls that depended on blockhashes and previous tx receipts, and that broke naive approaches.

    Here’s the thing. Multi-chain wallets must simulate on every chain involved, which multiplies complexity fast. You need fee estimates, contract ABIs, nonce states, and mempool sense. On one hand simulations give confidence because you can preview state changes off-chain in a sandboxed environment, though actually there are layers where on-chain state and mempool ordering introduce variance that the simulator must approximate or else return misleading results.

    Wow, complicated stuff. Cross-chain swaps add routing, bridges, and relayers to the mix. Each leg has its own failure modes and liquidity quirks that change cost and slippage. On the other hand, smart routing with simulated dry-runs across AMMs and DEX aggregators can identify profitable routes, but it must also factor in bridge time, relayer fees, and possible front-running risks that vary widely by network conditions and user path. I realized we can’t treat cross-chain swaps as atomic unless some protocol layer ensures settlement.

    [Image: a simulated transaction showing estimated gas, slippage, and route breakdown]

    Hmm… that’s rough. Simulating approval flows is low-hanging fruit and often ignored, oddly. If a token contract has quirky allowance behavior, swaps can fail. Actually, wait—let me rephrase that; simulation isn’t just about predicting failures, it’s about predicting how gas and state will evolve across chained calls, and you have to model reentrancy guards, delegatecall paths, and fallback functions that behave differently under different calldata. That modeling can save users from losing funds or getting stuck in pending states.

    Okay, so check this out— I started using simulation as a gating step for swaps in tests. Sometimes a simulated success still failed in production because of mempool ordering. On one hand you can build heuristics that detect likely front-run windows and prefer routes with lower attack surface, though actually these heuristics need tuning per-network and per-pair, and they can produce false negatives that give false confidence. It’s a balancing act between speed, accuracy, and usability.

    I’ll be honest. I’m biased toward client-side simulation because I trust noncustodial flows more. Client-side sim reduces data leakage and keeps secret routing logic off centralized servers. But server-side aggregated simulators can leverage broader mempool views and replay historical blocks to build probabilistic models of transaction inclusion and fee increments, which helps when users need fast decisions at scale though it introduces trust and privacy tradeoffs. So a hybrid approach often fits privacy and convenience needs.

    This part bugs me. Rabby’s design reminds me of that pragmatic balance between local control and useful cloud assists. A wallet that simulates swaps locally while fetching path data strikes compromise. Initially I thought integrated bridges would do the heavy lifting, but then I noticed that bridging introduces time lags and intermediate custodial surfaces, and that changes the security model of any multi-chain swap workflow significantly for end users. Ultimately the goal is to make swaps feel atomic even when they’re not.

    How a Practical Simulation Workflow Looks

    Start local; simulate the tx with the exact calldata, the user’s nonce, and the destination chain state snapshot, then compare alternative routes off-chain and only then consider optional server hints for broader liquidity. I’m biased, but client-side sims should be the first gate, with optional aggregated checks for routing suggestions when latency allows. A good wallet shows the simulated state changes, expected gas drawdowns, and potential failure points so users can make an informed call instead of blindly confirming a tx… or worse.
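The route-comparison part of this workflow can be approximated off-chain. What follows is a toy sketch only: the constant-product pool model, reserve figures, and route names are hypothetical stand-ins for a real simulator, which would replay the exact calldata against a forked chain state snapshot:

```python
# Toy dry-run of swap routes against a constant-product (x*y=k) AMM model.
# Reserve figures, fees, and route names are hypothetical; a real wallet
# simulator would replay exact calldata against forked chain state instead.

def amm_output(amount_in, reserve_in, reserve_out, fee=0.003):
    """Output of a constant-product swap after the pool fee."""
    amount_with_fee = amount_in * (1 - fee)
    return (amount_with_fee * reserve_out) / (reserve_in + amount_with_fee)

def best_route(amount_in, routes):
    """Simulate every route and pick the one with the highest output."""
    simulated = {
        name: amm_output(amount_in, r_in, r_out)
        for name, (r_in, r_out) in routes.items()
    }
    return max(simulated, key=simulated.get), simulated

routes = {
    "pool-a": (1_000_000, 500_000),  # deep pool, low slippage
    "pool-b": (10_000, 5_200),       # better headline price, thin liquidity
}
winner, quotes = best_route(5_000, routes)
print(winner)  # pool-a
```

Note the outcome: pool-b quotes a better marginal price (0.52 vs 0.50), yet the deep pool wins once trade size and slippage enter the simulation — exactly the kind of surprise a dry-run surfaces before money moves.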

    Common Questions

    Q: Can simulation prevent MEV and front-running?

    A: It helps identify high-risk windows and suggests safer routes, though it can’t eliminate MEV entirely. Heuristics reduce exposure, but they need per-network tuning and continuous monitoring.

    Q: Should simulations run in the cloud or locally?

    A: Both have tradeoffs. Local sims protect privacy and keys, while server-assisted sims provide richer mempool context and historical data. A hybrid model often works best for wallets targeting diverse users.

    Q: Where can I try a wallet with strong simulation features?

    A: If you want a practical, multi-chain experience with advanced safety features, check out rabby wallet — it shows how pragmatic design balances local control and helpful cloud services.

  • phi3_local_rag

    Install Ollama

    ollama run phi3

    ollama pull nomic-embed-text

    pip install langchain_experimental

    indexer.py
    
    from langchain_experimental.text_splitter import SemanticChunker
    from langchain_text_splitters import RecursiveCharacterTextSplitter
    
    
    from langchain_community.document_loaders import DirectoryLoader
    from langchain_community.embeddings import OllamaEmbeddings
    from langchain_community.vectorstores import Chroma
    
    # Load documents from a directory
    loader = DirectoryLoader("./places_transcripts", glob="**/*.txt")
    
    print("directory loader created")
    
    documents = loader.load()
    
    print(len(documents))
    
    # Create embeddings
    embeddings = OllamaEmbeddings(model="nomic-embed-text", show_progress=True)
    
    # Optionally, use a semantic splitter instead:
    # text_splitter = SemanticChunker(embeddings, breakpoint_threshold_type="interquartile")
    
    text_splitter = RecursiveCharacterTextSplitter(
        chunk_size=1500,
        chunk_overlap=300,
        add_start_index=True,
    )
    
    # Split documents into chunks
    texts = text_splitter.split_documents(documents)
    
    # Create the vector store and persist it to disk
    vectorstore = Chroma.from_documents(
        documents=texts,
        embedding=embeddings,
        persist_directory="./db-place")
    
    print("vectorstore created")
    ollama_phi3_rag.py
    
    
    from langchain_community.embeddings import OllamaEmbeddings
    from langchain_community.vectorstores import Chroma
    from langchain_community.chat_models import ChatOllama
    
    from langchain.prompts import ChatPromptTemplate
    from langchain.schema.runnable import RunnablePassthrough
    from langchain.schema.output_parser import StrOutputParser
    
    
    # Create embeddings
    embeddings = OllamaEmbeddings(model="nomic-embed-text", show_progress=False)
    
    db = Chroma(persist_directory="./db-place",
                embedding_function=embeddings)
    
    # Create retriever
    retriever = db.as_retriever(
        search_type="similarity",
        search_kwargs={"k": 5}
    )
    
    # Create the Ollama language model
    local_llm = 'phi3'
    
    llm = ChatOllama(model=local_llm,
                     keep_alive="3h", 
                     max_tokens=512,  
                     temperature=0)
    
    # Create prompt template
    template = """<bos><start_of_turn>user\nAnswer the question based only on the following context and extract a meaningful answer. \
    Please write in full sentences with correct spelling and punctuation. If it makes sense, use lists. \
    If the context doesn't contain the answer, just respond that you are unable to find an answer. \
    
    CONTEXT: {context}
    
    QUESTION: {question}
    
    <end_of_turn>
    <start_of_turn>model\n
    ANSWER:"""
    prompt = ChatPromptTemplate.from_template(template)
    
    # Create the RAG chain using LCEL with streaming output
    rag_chain = (
        {"context": retriever, "question": RunnablePassthrough()}
        | prompt
        | llm
    )
    
    # Function to ask questions
    def ask_question(question):
        print("Answer:\n\n", end=" ", flush=True)
        for chunk in rag_chain.stream(question):
            print(chunk.content, end="", flush=True)
        print("\n")
    
    # Example usage
    if __name__ == "__main__":
        while True:
            user_question = input("Ask a question (or type 'quit' to exit): ")
            if user_question.lower() == 'quit':
                break
            ask_question(user_question)
            # print("\nFull answer received.\n")
  • Open Weather Tool

    pip install llama-index-tools-weather
    pip install pyowm
    from llama_index.tools.weather import OpenWeatherMapToolSpec
    from llama_index.agent.openai import OpenAIAgent
    
    # Create the tool specification with your API key
    tool_spec = OpenWeatherMapToolSpec(key="Enter your key here")
    
    # Initialize the OpenAIAgent with the tool specification
    agent = OpenAIAgent.from_tools(tool_spec.to_tool_list())
    
    # Query the agent about the weather
    response = agent.chat("What is the temperature like in London?")
    print(response)

    Adding weather to praisonai using llama_index

    pip install praisonai

    tools.py file

    from llama_index.tools.weather import OpenWeatherMapToolSpec
    from praisonai_tools import BaseTool
    from llama_index.agent.openai import OpenAIAgent
    import os
    
    class WeatherTool(BaseTool):
        name: str = "Weather Tool"
        description: str = "Get the current weather information for a specified location"
    
        def _run(self, location: str):
            # Use your API key from the environment variable
    
            api_key = os.getenv("OPENWEATHERMAP_API_KEY")
            tool_spec = OpenWeatherMapToolSpec(key=api_key)
            agent = OpenAIAgent.from_tools(tool_spec.to_tool_list())
            response = agent.chat(f"What is the temperature like in {location}?")
            return response
    praisonai --init "What is the weather in Paris tomorrow"

    The above command creates the agents.yaml file.
    Open agents.yaml and list the tool under each agent that should use it, for example:

    tools:
      - WeatherTool

  • Wikipedia Tool

    from langchain_community.tools import WikipediaQueryRun
    from langchain_community.utilities import WikipediaAPIWrapper
    from langchain_core.pydantic_v1 import BaseModel, Field
    
    
    class WikiInputs(BaseModel):
        """Inputs to the wikipedia tool."""
    
        query: str = Field(
            description="query to look up in Wikipedia, should be 3 or less words"
        )
    
    
    # Wikipedia's API is open, so no API key is needed
    api_wrapper = WikipediaAPIWrapper(top_k_results=1, doc_content_chars_max=1000)
    
    tool = WikipediaQueryRun(
        name="wiki-tool",
        description="look up things in wikipedia",
        args_schema=WikiInputs,
        api_wrapper=api_wrapper,
        return_direct=True,
    )
    
    print(tool.run("Who is Dwayne Johnson?"))
  • Exa Tools

    from exa_py import Exa
    from praisonai_tools import BaseTool
    
    class ExaSearch:
        name: str = "ExaSearch"
        description: str = "Perform a search using this ExaSearch tool and returns search results with url"
    
        def __init__(self, api_key: str):
            self.exa = Exa(api_key=api_key)
    
        def run(self, query: str):
            results = self.exa.search_and_contents(
                query,
                text={"include_html_tags": True, "max_characters": 1000},
            )
            return results
    
    
    class ExaSimilar:
        name: str = "ExaSimilar"
        description: str = "Search for webpages similar to a given URL using ExaSimilar tool"
    
        def __init__(self, api_key: str):
            self.exa = Exa(api_key=api_key)
    
        def run(self, url: str):
            """Search for webpages similar to a given URL.
            The url passed in should be a URL returned from `search`.
            """
            results = self.exa.find_similar(url, num_results=3)
            return results
    
    
    class ExaContents:
        name: str = "ExaContents"
        description: str = "Get the contents of a webpage using a list of urls using ExaContents tool"
    
        def __init__(self, api_key: str):
            self.exa = Exa(api_key=api_key)
    
        def run(self, ids: list):
            """Get the contents of a webpage.
            The ids must be passed in as a list, a list of ids returned from `search`.
            """
            contents = self.exa.get_contents(ids)
            contents_str = str(contents)
            split_contents = contents_str.split("URL:")
            trimmed_contents = [content[:1000] for content in split_contents]
            return "\n\n".join(trimmed_contents)
    
    # Example usage
    if __name__ == "__main__":
        api_key = "Enter your exa key"
        search_tool = ExaSearch(api_key=api_key)
        search_query = "latest AI News"
        search_results = search_tool.run(search_query)
        print("Search Results:", search_results)
    
        # Find similar webpages
        similar_tool = ExaSimilar(api_key=api_key)
        similar_results = similar_tool.run("https://boomi.com/")  # Valid URL
        print("Similar Results:", similar_results)
    
        # Get contents using ids returned from `search`
        contents_tool = ExaContents(api_key=api_key)
        contents = contents_tool.run(["tesla.com"])  # Replace with actual IDs
        print("Contents:", contents)
    
  • Crewai Apify Tool

    from crewai import Agent, Task, Crew
    import os
    from apify_client import ApifyClient
    from langchain.tools import tool
    from typing_extensions import Annotated
    
    client = ApifyClient("Enter your Apify key here")
    
    @tool("Web Scraper Tool")
    def web_scraper_tool(url: Annotated[str, "https://example.com/"]) -> Annotated[str, "Scraped content"]:
        """Web Scraper loads Start URLs in the browser and executes Page function on each page to extract data from it."""
    
        run_input = {
            "runMode": "DEVELOPMENT",
            "startUrls": [{ "url": url }],
            "linkSelector": "a[href]",
            "globs": [{ "glob": "https://example.com/*" }],
            "pseudoUrls": [],
            "excludes": [{ "glob": "/**/*.{png,jpg,jpeg,pdf}" }],
            "pageFunction": """// The function accepts a single argument: the "context" object.
            // see https://apify.com/apify/web-scraper#page-function
            async function pageFunction(context) {
                // This statement works as a breakpoint when you're trying to debug your code. Works only with Run mode: DEVELOPMENT!
                // debugger;
                const $ = context.jQuery;
                const pageTitle = $('title').first().text();
                const h1 = $('h1').first().text();
                const first_h2 = $('h2').first().text();
                const random_text_from_the_page = $('p').first().text();
                // Print some information to actor log
                context.log.info(`URL: ${context.request.url}, TITLE: ${pageTitle}`);
    
                // Manually add a new page to the queue for scraping.
                await context.enqueueRequest({ url: context.request.url });
    
                return {
                    url: context.request.url,
                    pageTitle,
                    h1,
                    first_h2,
                    random_text_from_the_page
                };
            }""",
            "proxyConfiguration": { "useApifyProxy": True },
            "initialCookies": [],
            "waitUntil": ["networkidle2"],
            "preNavigationHooks": """// We need to return array of (possibly async) functions here.
            // and "gotoOptions".
            [
                async (crawlingContext, gotoOptions) => {
                    // ...
                },
            ]
            """,
            "postNavigationHooks": """// We need to return array of (possibly async) functions here.
            // The functions accept a single argument: the "crawlingContext" object.
            [
                async (crawlingContext) => {
                    // ...
                },
            ]""",
            "breakpointLocation": "NONE",
        }
        # Run the Actor and wait for it to finish
        run = client.actor("apify/web-scraper").call(run_input=run_input)
    
        # Fetch and print Actor results from the run's dataset (if there are any)
        #print("Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
        text_data = ""
        for item in client.dataset(run["defaultDatasetId"]).iterate_items():
            text_data += str(item) + "\n"
        return text_data
    
    # Create the web scraper agent
    web_scraper_agent = Agent(
        role='Web Scraper',
        goal='Effectively scrape data from websites for your company',
        backstory='''You are an expert web scraper. Your job is to scrape all the data for
                    your company from a given website.
                    ''',
        tools=[web_scraper_tool],
        verbose=True
    )
    
    # Define the web scraper task
    web_scraper_task = Task(
        description='Scrape all the URLs on the site so your company can use it for crawling and scraping.',
        expected_output='All the content of the website listed.',
        agent=web_scraper_agent,
        output_file='data.txt'
    )
    
    # Assemble the crew
    crew = Crew(
        agents=[web_scraper_agent],
        tasks=[web_scraper_task],
        verbose=2,
    )
    
    # Execute tasks
    result = crew.kickoff()
    print(result)
    
    # Save the result to a file
    with open('results.txt', 'w') as f:
        f.write(result)
  • Exa Search Tool

    from exa_py import Exa
    from praisonai_tools import BaseTool
    
    class ExaSearchTool:
        name: str = "ExaSearchTool"
        description: str = "Perform a search using Exa and retrieve contents with specified options"
    
        def __init__(self, api_key: str):
            self.exa = Exa(api_key=api_key)
    
        def _run(self, query: str):
            results = self.exa.search_and_contents(
                query,
                text={"include_html_tags": True, "max_characters": 1000},
            )
            return results

    # Example usage
    if __name__ == "__main__":
        api_key = "Enter your key here"
        tool = ExaSearchTool(api_key=api_key)
        search_query = "recent midjourney news"
        results = tool._run(search_query)
        print(results)
  • AutoGen Scraping

    ! pip install -qqq pyautogen apify-client
    
    import os
    import openai
    
    config_list = [
        {"model": "gpt-3.5-turbo", "api_key": "Enter your api key"},
    ]
    
    from apify_client import ApifyClient
    from typing_extensions import Annotated
    
    
    def scrape_page(url: Annotated[str, "https://example.com/"]) -> Annotated[str, "Scraped content"]:
        # Initialize the ApifyClient with your API token
        client = ApifyClient(token="Enter your apify key")
    
        # Prepare the Actor input
        run_input = {
            "startUrls": [{"url": url}],
            "useSitemaps": False,
            "crawlerType": "playwright:firefox",
            "includeUrlGlobs": [],
            "excludeUrlGlobs": [],
            "ignoreCanonicalUrl": False,
            "maxCrawlDepth": 0,
            "maxCrawlPages": 4,
            "initialConcurrency": 0,
            "maxConcurrency": 200,
            "initialCookies": [],
            "proxyConfiguration": {"useApifyProxy": True},
            "maxSessionRotations": 10,
            "maxRequestRetries": 5,
            "requestTimeoutSecs": 60,
            "dynamicContentWaitSecs": 10,
            "maxScrollHeightPixels": 5000,
            "removeElementsCssSelector": """nav, footer, script, style, noscript, svg,
        [role=\"alert\"],
        [role=\"banner\"],
        [role=\"dialog\"],
        [role=\"alertdialog\"],
        [role=\"region\"][aria-label*=\"skip\" i],
        [aria-modal=\"true\"]""",
            "removeCookieWarnings": True,
            "clickElementsCssSelector": '[aria-expanded="false"]',
            "htmlTransformer": "readableText",
            "readableTextCharThreshold": 100,
            "aggressivePrune": False,
            "debugMode": True,
            "debugLog": True,
            "saveHtml": True,
            "saveMarkdown": True,
            "saveFiles": False,
            "saveScreenshots": False,
            "maxResults": 9999999,
            "clientSideMinChangePercentage": 15,
            "renderingTypeDetectionPercentage": 10,
        }
    
        # Run the Actor and wait for it to finish
        run = client.actor("aYG0l9s7dbB7j3gbS").call(run_input=run_input)
    
        # Fetch and print Actor results from the run's dataset (if there are any)
        text_data = ""
        for item in client.dataset(run["defaultDatasetId"]).iterate_items():
            text_data += item.get("text", "") + "\n"
    
        average_token = 0.75  # heuristic character budget per token
        max_tokens = 20000  # stay well below the 32k context limit
        text_data = text_data[: int(average_token * max_tokens)]
        return text_data
    
    from autogen import ConversableAgent, register_function
    
    # Create web scraper agent.
    scraper_agent = ConversableAgent(
        "WebScraper",
        llm_config={"config_list": config_list},
        system_message="You are a web scraper and you can scrape any web page using the tools provided. "
        "Return 'TERMINATE' when the scraping is done.",
    )
    
    # Create user proxy agent.
    user_proxy_agent = ConversableAgent(
        "UserProxy",
        llm_config=False,  # No LLM for this agent.
        human_input_mode="NEVER",
        code_execution_config=False,  # No code execution for this agent.
        is_termination_msg=lambda x: x.get("content", "") is not None and "terminate" in x["content"].lower(),
        default_auto_reply="Please continue if not finished, otherwise return 'TERMINATE'.",
    )
    
    # Register the function with the agents.
    register_function(
        scrape_page,
        caller=scraper_agent,
        executor=user_proxy_agent,
        name="scrape_page",
        description="Scrape a web page and return the content.",
    )
    
    chat_result = user_proxy_agent.initiate_chat(
        scraper_agent,
        message="Can you scrape https://example.com/ for me?",
        summary_method="reflection_with_llm",
        summary_args={
            "summary_prompt": """Summarize the scraped content and format summary EXACTLY as follows:
    ---
    *Website*:
    `https://example.com/`
    ---
    *content*:
    `[CONTENT GOES HERE]`
    ---
    """
        },
    )
    
    print(chat_result)