<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[pxng0lin unchained]]></title><description><![CDATA[pxng0lin unchained]]></description><link>https://unchained.pxng0lin.xyz</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1747071768712/aac0c4c3-4e78-4468-99f5-0006eda49457.png</url><title>pxng0lin unchained</title><link>https://unchained.pxng0lin.xyz</link></image><generator>RSS for Node</generator><lastBuildDate>Mon, 20 Apr 2026 10:32:27 GMT</lastBuildDate><atom:link href="https://unchained.pxng0lin.xyz/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[/6  Robustness & Diagram Validation – Polishing the Final Version (Solidsight v11)]]></title><description><![CDATA[The Need for Rigorous Validation
As Solidsight matured, accuracy became paramount. One key improvement was ensuring generated Mermaid diagrams were accurate and meaningful, addressing the occasional production of invalid placeholder diagrams by the L...]]></description><link>https://unchained.pxng0lin.xyz/6-robustness-and-diagram-validation-polishing-the-final-version-solidsight-v11</link><guid isPermaLink="true">https://unchained.pxng0lin.xyz/6-robustness-and-diagram-validation-polishing-the-final-version-solidsight-v11</guid><category><![CDATA[DeepCurrent]]></category><category><![CDATA[Solidsight]]></category><category><![CDATA[Web3]]></category><category><![CDATA[NotADev]]></category><category><![CDATA[AI]]></category><category><![CDATA[Python]]></category><dc:creator><![CDATA[Isa]]></dc:creator><pubDate>Fri, 02 May 2025 05:00:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1743094609419/34c0f1e6-bb12-4735-a821-a54f94249cff.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-the-need-for-rigorous-validation"><strong>The Need for Rigorous Validation</strong></h3>
<p>As Solidsight matured, accuracy became paramount. One key improvement was ensuring generated Mermaid diagrams were accurate and meaningful, since the LLM occasionally produced invalid placeholder diagrams.</p>
<hr />
<h3 id="heading-validation-and-regeneration-logic"><strong>Validation and Regeneration Logic</strong></h3>
<p>I developed a function to validate generated diagrams:</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">is_valid_mermaid</span>(<span class="hljs-params">diagram</span>):</span>
    <span class="hljs-keyword">return</span> (diagram.strip().startswith((<span class="hljs-string">"flowchart TD"</span>, <span class="hljs-string">"sequenceDiagram"</span>))
            <span class="hljs-keyword">and</span> <span class="hljs-string">"Default Diagram"</span> <span class="hljs-keyword">not</span> <span class="hljs-keyword">in</span> diagram)
</code></pre>
<p>When invalid diagrams were detected, the tool would prompt users to regenerate them using the original contract code.</p>
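<p>As a rough sketch, that validate-and-regenerate loop might look like the following; <code>generate_diagram</code> is a hypothetical stand-in for the actual LLM call, not the tool's real function:</p>

```python
def is_valid_mermaid(diagram):
    # Same check as above: accept only real flowcharts/sequence diagrams
    return (diagram.strip().startswith(("flowchart TD", "sequenceDiagram"))
            and "Default Diagram" not in diagram)

def diagram_with_retries(generate_diagram, contract_code, max_attempts=3):
    """Regenerate from the original contract code until validation passes."""
    for _ in range(max_attempts):
        diagram = generate_diagram(contract_code)
        if is_valid_mermaid(diagram):
            return diagram
    return None  # caller can warn the user instead of emitting a bad diagram
```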
<h3 id="heading-persistent-challenges-solved"><strong>Persistent Challenges Solved</strong></h3>
<p>Previously, invalid diagrams went unnoticed, causing confusion for users. With this validation step, such diagrams were caught and addressed immediately, ensuring consistently high-quality outputs.</p>
<h3 id="heading-final-reflections"><strong>Final Reflections</strong></h3>
<p>Implementing robust validation and regeneration was perhaps one of the most impactful improvements. It highlighted the critical importance of quality assurance processes, ultimately turning Solidsight into a dependable, polished tool suitable for serious contract analysis.</p>
<hr />
<h2 id="heading-where-can-you-find-it">Where can you find it?</h2>
<p>I took it to GitHub after a fellow SR on X showed some interest in using it, and the app is now available for all to use and adapt to their liking. The overall code isn’t complex; I used AI and made adjustments where needed (it speeds up the process). Manually creating is fun, but my time is spent in codebases looking for bugs, so I try not to spend too much time building unless it’s completely necessary.</p>
<p>I also changed the name to DeepCurrent. I still like Solidsight, but for a final version 1, DeepCurrent was the choice.</p>
<p>Have fun &amp; happy hunting: <a target="_blank" href="https://github.com/pxng0lin/DeepCurrent">App: DeepCurrent</a></p>
]]></content:encoded></item><item><title><![CDATA[/5 Managing Sessions & Historical Analyses (Solidsight v9 & v10)]]></title><description><![CDATA[Rationale for Session Management
As Solidsight's capabilities expanded, managing historical analysis data became increasingly important. The next logical step was introducing structured session management, allowing users to revisit previous analyses ...]]></description><link>https://unchained.pxng0lin.xyz/5-managing-sessions-and-historical-analyses-solidsight-v9-and-v10</link><guid isPermaLink="true">https://unchained.pxng0lin.xyz/5-managing-sessions-and-historical-analyses-solidsight-v9-and-v10</guid><category><![CDATA[Web3]]></category><category><![CDATA[AI]]></category><category><![CDATA[Python]]></category><category><![CDATA[NotADev]]></category><dc:creator><![CDATA[Isa]]></dc:creator><pubDate>Fri, 25 Apr 2025 05:00:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1743094234460/dcb7953f-5c2b-4c8c-990f-1a0ec9cb5da0.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-rationale-for-session-management"><strong>Rationale for Session Management</strong></h3>
<p>As Solidsight's capabilities expanded, managing historical analysis data became increasingly important. The next logical step was introducing structured session management, allowing users to revisit previous analyses easily.</p>
<h3 id="heading-structured-session-browsing"><strong>Structured Session Browsing</strong></h3>
<p>The session management system organised analyses neatly by timestamped directories. Users could browse and revisit any historical data effortlessly:</p>
<pre><code class="lang-python">def browse_sessions():
    sessions = [d <span class="hljs-keyword">for</span> d <span class="hljs-keyword">in</span> os.listdir(os.getcwd()) <span class="hljs-keyword">if</span> d.startswith(<span class="hljs-string">"analysis_"</span>)]
    <span class="hljs-keyword">if</span> <span class="hljs-keyword">not</span> sessions:
        console.print(<span class="hljs-string">"[bold red]No analysis sessions found.[/bold red]"</span>)
        <span class="hljs-keyword">return</span>
    table = Table(title=<span class="hljs-string">"Analysis Sessions"</span>)
    <span class="hljs-keyword">for</span> idx, session <span class="hljs-keyword">in</span> enumerate(sessions, start=<span class="hljs-number">1</span>):
        table.add_row(<span class="hljs-string">f"<span class="hljs-subst">{idx}</span>"</span>, session)
    console.print(table)
</code></pre>
<h3 id="heading-navigating-complexities"><strong>Navigating Complexities</strong></h3>
<p>Managing file paths, ensuring consistency in data storage, and handling incomplete analysis sessions were initial obstacles. The early implementations faced occasional crashes due to mislabelled files or corrupted data.</p>
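<p>A defensive loader of the kind that eventually stopped those crashes might look like this sketch (the JSON-artefact layout here is illustrative, not the tool's actual storage schema):</p>

```python
import json
import os

def load_session(session_dir):
    """Load a session's JSON artefacts, skipping missing or corrupted files."""
    data = {}
    for name in sorted(os.listdir(session_dir)):
        if not name.endswith(".json"):
            continue  # ignore non-JSON artefacts such as Markdown reports
        path = os.path.join(session_dir, name)
        try:
            with open(path, encoding="utf-8") as f:
                data[name] = json.load(f)
        except (json.JSONDecodeError, OSError):
            continue  # corrupted or unreadable: skip rather than crash
    return data
```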
<h3 id="heading-insights-gained"><strong>Insights Gained</strong></h3>
<p>By addressing these challenges, I learned the significance of comprehensive file validation and clear storage architecture. Effective session management ultimately allowed users to benefit more fully from historical analyses, making Solidsight highly practical.</p>
<p>We will stop here, and I’ll see you on the next one.</p>
<p>pxng0lin.</p>
]]></content:encoded></item><item><title><![CDATA[/4 Adding Flexibility – Supporting Multiple LLM Models (Solidsight v8)]]></title><description><![CDATA[Expanding the Tool's Capabilities
Smart contract analyses can vary greatly in complexity and purpose. Realising this, I added support for multiple LLM models, allowing users to tailor analyses according to specific needs. This flexibility transformed...]]></description><link>https://unchained.pxng0lin.xyz/4-adding-flexibility-supporting-multiple-llm-models-solidsight-v8</link><guid isPermaLink="true">https://unchained.pxng0lin.xyz/4-adding-flexibility-supporting-multiple-llm-models-solidsight-v8</guid><category><![CDATA[Web3]]></category><category><![CDATA[AI]]></category><category><![CDATA[Python]]></category><category><![CDATA[NotADev]]></category><dc:creator><![CDATA[Isa]]></dc:creator><pubDate>Fri, 11 Apr 2025 05:00:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1743093605753/1c484fe6-fa37-4bfd-a4cb-d0b732d1e087.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-expanding-the-tools-capabilities"><strong>Expanding the Tool's Capabilities</strong></h3>
<p>Smart contract analyses can vary greatly in complexity and purpose. Realising this, I added support for multiple LLM models, allowing users to tailor analyses according to specific needs. This flexibility transformed Solidsight into a much more versatile tool.</p>
<hr />
<h3 id="heading-multi-model-support"><strong>Multi-model Support</strong></h3>
<p>I introduced a simple yet effective model-selection prompt using Rich:</p>
<pre><code class="lang-python">analysis_model = Prompt.ask(
    <span class="hljs-string">"Select your analysis model"</span>, 
    choices=[<span class="hljs-string">"deepseek-r1"</span>, <span class="hljs-string">"qwen2.5-coder:3b"</span>, <span class="hljs-string">"gemma3:4b"</span>], 
    default=<span class="hljs-string">"deepseek-r1"</span>
)
</code></pre>
<h3 id="heading-challenges-in-implementation"><strong>Challenges in Implementation</strong></h3>
<p>Different models required slight prompt adjustments due to their varied interpretations and output formats. It took careful prompt engineering and extensive testing to achieve consistency across models.</p>
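<p>One way to keep that per-model prompt engineering manageable is a small lookup of model-specific tweaks layered on a shared base prompt; the tweak strings below are purely illustrative, not the actual prompts used:</p>

```python
# Hypothetical per-model additions; the base instructions stay shared.
PROMPT_TWEAKS = {
    "deepseek-r1": "Reason step by step before giving the final answer.",
    "qwen2.5-coder:3b": "Answer concisely, formatted as Markdown.",
    "gemma3:4b": "Use plain headings and avoid nested lists.",
}

def build_prompt(base_prompt, model):
    """Append the tweak for the chosen model, if one exists."""
    tweak = PROMPT_TWEAKS.get(model, "")
    return f"{base_prompt}\n\n{tweak}".strip()
```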
<hr />
<h3 id="heading-lessons-learned"><strong>Lessons Learned</strong></h3>
<p>The value of modular design became clear—flexibility allowed Solidsight to adapt easily to varying analysis requirements.</p>
<p>We will stop here, and I’ll see you on the next one.</p>
<p>pxng0lin.</p>
]]></content:encoded></item><item><title><![CDATA[SFA: /3 Optimising Workflow and Enhancing Robustness (v4 & v5)]]></title><description><![CDATA[Overview
Versions 4 and 5 saw significant strides in workflow optimisation, making the SFA more practical and user-friendly. Efficiency and redundancy elimination became central to these iterations.
Duplicates
A key feature introduced was a robust du...]]></description><link>https://unchained.pxng0lin.xyz/sfa-3-optimising-workflow-and-enhancing-robustness-v4-and-v5</link><guid isPermaLink="true">https://unchained.pxng0lin.xyz/sfa-3-optimising-workflow-and-enhancing-robustness-v4-and-v5</guid><category><![CDATA[Web3]]></category><category><![CDATA[Python]]></category><category><![CDATA[AI]]></category><category><![CDATA[NotADev]]></category><dc:creator><![CDATA[Isa]]></dc:creator><pubDate>Mon, 07 Apr 2025 07:00:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1743208612154/c24907a1-dda9-4610-aa65-ff8025a19c22.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-overview">Overview</h3>
<p>Versions 4 and 5 saw significant strides in workflow optimisation, making the SFA more practical and user-friendly. Efficiency and redundancy elimination became central to these iterations.</p>
<h3 id="heading-duplicates">Duplicates</h3>
<p>A key feature introduced was a robust duplicate-checking mechanism, significantly cutting down on redundant processing:</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">report_already_exists</span>(<span class="hljs-params">report_id: str, db_path=<span class="hljs-string">"vectorisation.db"</span></span>) -&gt; bool:</span>
    conn = sqlite3.connect(db_path)
    c = conn.cursor()
    c.execute(<span class="hljs-string">"SELECT 1 FROM reports WHERE id=?"</span>, (report_id,))
    exists = c.fetchone()
    conn.close()
    <span class="hljs-keyword">return</span> bool(exists)
</code></pre>
<h3 id="heading-a-little-extra">A Little Extra</h3>
<p>I also implemented substantial interface improvements, such as clearer feedback with progress bars and advanced logging systems, allowing easier debugging and providing greater transparency during processing tasks.</p>
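<p>For the logging side, a minimal sketch of a dual console-plus-file setup of the kind described above (handler levels and the log file name are assumptions):</p>

```python
import logging

def setup_logging(log_file="sfa.log"):
    """Console gets INFO-level status; the log file keeps full DEBUG detail."""
    logger = logging.getLogger("sfa")
    logger.setLevel(logging.DEBUG)
    fmt = logging.Formatter("%(asctime)s %(levelname)s %(message)s")
    file_handler = logging.FileHandler(log_file)
    file_handler.setLevel(logging.DEBUG)
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.INFO)
    for handler in (file_handler, console_handler):
        handler.setFormatter(fmt)
        logger.addHandler(handler)
    return logger
```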
<p>These improvements significantly enhanced the user experience, making the system easier and more intuitive to operate.</p>
<p>A short read, but progress! See you soon.</p>
<p>pxng0lin</p>
]]></content:encoded></item><item><title><![CDATA[/3 Improving Readability & Engagement – The Rich Library Integration (Solidsight v5, v6, & v7)]]></title><description><![CDATA[Motivation for Change
Although Solidsight was becoming powerful, its plain text outputs lacked readability. To address this, I integrated the Rich library, significantly enhancing the visual appeal and clarity of the command-line interface.

Implemen...]]></description><link>https://unchained.pxng0lin.xyz/3-improving-readability-and-engagement-the-rich-library-integration-solidsight-v5-v6-and-v7</link><guid isPermaLink="true">https://unchained.pxng0lin.xyz/3-improving-readability-and-engagement-the-rich-library-integration-solidsight-v5-v6-and-v7</guid><category><![CDATA[Web3]]></category><category><![CDATA[AI]]></category><category><![CDATA[Python]]></category><category><![CDATA[NotADev]]></category><dc:creator><![CDATA[Isa]]></dc:creator><pubDate>Fri, 04 Apr 2025 05:00:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1743091967193/e6b81dea-3ce0-42cb-a363-ec3640b13b6b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-motivation-for-change"><strong>Motivation for Change</strong></h3>
<p>Although Solidsight was becoming powerful, its plain text outputs lacked readability. To address this, I integrated the Rich library, significantly enhancing the visual appeal and clarity of the command-line interface.</p>
<hr />
<h3 id="heading-implementing-rich"><strong>Implementing Rich</strong></h3>
<p>Rich transformed simple outputs into coloured, visually structured messages and tables:</p>
<pre><code class="lang-python">from rich.console <span class="hljs-keyword">import</span> Console
console = Console()

console.print(<span class="hljs-string">"[bold green]Analysis Complete![/bold green]"</span>)
</code></pre>
<h3 id="heading-errors-and-compatibility-issues"><strong>Errors and Compatibility Issues</strong></h3>
<p>Initially, Rich caused display anomalies in certain terminals. Compatibility across different terminal emulators required additional tweaking.</p>
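<p>One common workaround is to detect up front whether the output stream can handle styled output at all and fall back to plain text otherwise. A stdlib-only heuristic along these lines (Rich's own <code>Console</code> performs a similar detection internally):</p>

```python
import os
import sys

def supports_styled_output(stream=sys.stdout):
    """Rough heuristic: only emit ANSI styling to a real, capable terminal."""
    if not hasattr(stream, "isatty") or not stream.isatty():
        return False  # piped or redirected output: keep it plain
    return os.environ.get("TERM", "") not in ("", "dumb")
```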
<hr />
<h3 id="heading-reflections"><strong>Reflections</strong></h3>
<p>Adding Rich significantly improved user interaction, facilitating clearer communication of results and easier debugging during development.</p>
<p>We will stop here, and I’ll see you on the next one.</p>
<p>pxng0lin.</p>
]]></content:encoded></item><item><title><![CDATA[SFA: /2 Enhancing Stability and Data Management (v2 & v3)]]></title><description><![CDATA[Overview
Following the initial success, versions 2 and 3 of the SFA were dedicated to improving stability, refining database functionality, and expanding data handling capabilities. This phase was crucial in ensuring that the system was robust enough...]]></description><link>https://unchained.pxng0lin.xyz/sfa-2-enhancing-stability-and-data-management-v2-and-v3</link><guid isPermaLink="true">https://unchained.pxng0lin.xyz/sfa-2-enhancing-stability-and-data-management-v2-and-v3</guid><category><![CDATA[Web3]]></category><category><![CDATA[Python]]></category><category><![CDATA[AI]]></category><category><![CDATA[NotADev]]></category><dc:creator><![CDATA[Isa]]></dc:creator><pubDate>Mon, 31 Mar 2025 07:00:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1743208169006/9b0c0cb5-fe02-4ebc-8158-0d588983f13d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-overview">Overview</h3>
<p>Following the initial success, versions 2 and 3 of the SFA were dedicated to improving stability, refining database functionality, and expanding data handling capabilities. This phase was crucial in ensuring that the system was robust enough to handle increasing complexity.</p>
<h3 id="heading-improving-usability">Improving Usability</h3>
<p>Significant improvements involved enhancing the database to store structured JSON data for metadata, vector embeddings, and detailed AI-generated analysis summaries. This allowed queries to become significantly more flexible and efficient, enhancing overall usability.</p>
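<p>The pattern boils down to serialising structured values to JSON text before they hit SQLite's <code>TEXT</code> columns, and parsing them back on read. A reduced sketch (the real table holds more fields than shown here):</p>

```python
import json
import sqlite3

def save_report(conn, report_id, metadata, embedding):
    """Store structured values as JSON text in plain TEXT columns."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS reports "
        "(id TEXT PRIMARY KEY, metadata TEXT, overall_embedding TEXT)"
    )
    conn.execute(
        "INSERT OR REPLACE INTO reports VALUES (?, ?, ?)",
        (report_id, json.dumps(metadata), json.dumps(embedding)),
    )
    conn.commit()

def load_metadata(conn, report_id):
    row = conn.execute(
        "SELECT metadata FROM reports WHERE id=?", (report_id,)
    ).fetchone()
    return json.loads(row[0]) if row else None
```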
<p>Additionally, refining how Markdown content was split into sections greatly improved the precision of the analysis. The upgraded section-splitting function allowed the SFA to accurately handle diverse document structures, significantly improving the effectiveness of embedding and subsequent analysis:</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">split_into_sections</span>(<span class="hljs-params">text: str</span>) -&gt; list:</span>
    sections = re.split(<span class="hljs-string">r'\n(?=#)'</span>, text)
    results = []
    <span class="hljs-keyword">for</span> sec <span class="hljs-keyword">in</span> sections:
        sec = sec.strip()
        <span class="hljs-keyword">if</span> sec.startswith(<span class="hljs-string">"#"</span>):
            lines = sec.splitlines()
            heading = lines[<span class="hljs-number">0</span>].lstrip(<span class="hljs-string">'#'</span>).strip()
            content = <span class="hljs-string">"\n"</span>.join(lines[<span class="hljs-number">1</span>:]).strip()
            results.append({<span class="hljs-string">"title"</span>: heading, <span class="hljs-string">"content"</span>: content})
        <span class="hljs-keyword">else</span>:
            <span class="hljs-keyword">if</span> sec:
                results.append({<span class="hljs-string">"title"</span>: <span class="hljs-string">"Introduction"</span>, <span class="hljs-string">"content"</span>: sec})
    <span class="hljs-keyword">return</span> results
</code></pre>
<h3 id="heading-learnings">Learnings</h3>
<p>Versions 2 and 3 highlighted critical areas for further development, particularly regarding duplicate detection and more robust error management.</p>
<p>See you in the next one</p>
<p>pxng0lin</p>
]]></content:encoded></item><item><title><![CDATA[/2 Enhancing Interaction – Introducing Menus & Workflow Refinement (Solidsight v3 & v4)]]></title><description><![CDATA[Improving User Experience
I realised early on that functionality alone wasn't enough; Solidsight needed an intuitive way for users to interact with generated reports. This led to the development of interactive command-line menus, providing users with...]]></description><link>https://unchained.pxng0lin.xyz/2-enhancing-interaction-introducing-menus-and-workflow-refinement-solidsight-v3-and-v4</link><guid isPermaLink="true">https://unchained.pxng0lin.xyz/2-enhancing-interaction-introducing-menus-and-workflow-refinement-solidsight-v3-and-v4</guid><category><![CDATA[Web3]]></category><category><![CDATA[NotADev]]></category><category><![CDATA[AI]]></category><category><![CDATA[Python]]></category><dc:creator><![CDATA[Isa]]></dc:creator><pubDate>Fri, 28 Mar 2025 06:00:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1743090619163/cad5eda4-976e-4f1e-b0d2-e8c1a370444a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-improving-user-experience"><strong>Improving User Experience</strong></h3>
<p>I realised early on that functionality alone wasn't enough; Solidsight needed an intuitive way for users to interact with generated reports. This led to the development of interactive command-line menus, providing users with easy navigation options and clear choices.</p>
<hr />
<h3 id="heading-interactive-menu-implementation"><strong>Interactive Menu Implementation</strong></h3>
<p>The new menu allowed users to select specific analyses or regenerate outputs as needed:</p>
<pre><code class="lang-python">def main_menu(output_dir):
    <span class="hljs-keyword">while</span> <span class="hljs-literal">True</span>:
        print(<span class="hljs-string">"\n--- Main Menu ---"</span>)
        reports = [f <span class="hljs-keyword">for</span> f <span class="hljs-keyword">in</span> os.listdir(output_dir) <span class="hljs-keyword">if</span> f.endswith(<span class="hljs-string">"_analysis_report.md"</span>)]
        <span class="hljs-keyword">if</span> <span class="hljs-keyword">not</span> reports:
            print(<span class="hljs-string">"No reports found in this directory."</span>)
            <span class="hljs-keyword">break</span>
        <span class="hljs-keyword">for</span> idx, report <span class="hljs-keyword">in</span> enumerate(reports, start=<span class="hljs-number">1</span>):
            print(<span class="hljs-string">f"<span class="hljs-subst">{idx}</span>. <span class="hljs-subst">{report.replace(<span class="hljs-string">'_analysis_report.md'</span>, <span class="hljs-string">''</span>)}</span>"</span>)
        choice = input(<span class="hljs-string">"Select a report by number or type 'exit': "</span>)
        <span class="hljs-keyword">if</span> choice.lower() == <span class="hljs-string">'exit'</span>:
            <span class="hljs-keyword">break</span>
        <span class="hljs-comment"># additional logic here</span>
</code></pre>
<hr />
<h3 id="heading-navigating-initial-errors"><strong>Navigating Initial Errors</strong></h3>
<p>Initially, the menu would crash if no reports existed or when invalid inputs were entered. It became clear that thorough input validation and better user prompts were essential.</p>
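<p>The eventual fix amounted to validating every input path before acting on it; something like this sketch of a safe selection helper:</p>

```python
def choose_report(raw_choice, reports):
    """Return the selected report name, or None for 'exit' or bad input."""
    choice = raw_choice.strip().lower()
    if choice == "exit" or not choice.isdigit():
        return None
    index = int(choice)
    if not 1 <= index <= len(reports):
        return None  # out-of-range numbers are rejected, not crashed on
    return reports[index - 1]
```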
<h3 id="heading-lessons-learned"><strong>Lessons Learned</strong></h3>
<p>Addressing these user-experience issues taught me to anticipate user behaviours better and ensure the program remained stable, irrespective of how it was used.</p>
<p>We will stop here, and I’ll see you on the next one.</p>
<p>pxng0lin.</p>
]]></content:encoded></item><item><title><![CDATA[/1 The Genesis – Setting the Stage – Creating the Initial Smart Contract Analyser (Solidsight v1 & v2)]]></title><description><![CDATA[Overview
When I started building Solidsight, the idea was straightforward yet ambitious: automate the process of reviewing Solidity smart contracts by leveraging local Large Language Models (LLMs). I envisioned an app that could quickly dissect contr...]]></description><link>https://unchained.pxng0lin.xyz/1-the-genesis-building-a-smart-contract-analyser-from-scratch</link><guid isPermaLink="true">https://unchained.pxng0lin.xyz/1-the-genesis-building-a-smart-contract-analyser-from-scratch</guid><category><![CDATA[Web3]]></category><category><![CDATA[Python]]></category><category><![CDATA[AI]]></category><category><![CDATA[NotADev]]></category><dc:creator><![CDATA[Isa]]></dc:creator><pubDate>Fri, 21 Mar 2025 00:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1743090567128/d941a9da-7bbe-435f-b82b-36c56de5711f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-overview"><strong>Overview</strong></h3>
<p>When I started building <strong>Solidsight</strong>, the idea was straightforward yet ambitious: automate the process of reviewing Solidity smart contracts by leveraging local Large Language Models (LLMs). I envisioned an app that could quickly dissect contracts and produce clear, informative reports detailing their functionality, potential vulnerabilities, and user interactions.</p>
<hr />
<h3 id="heading-building-the-core-functionality"><strong>Building the Core Functionality</strong></h3>
<p>The initial goal was simple: parse Solidity files, generate a detailed functions report, describe the typical user journey, and visualise the function interactions using Mermaid diagrams.</p>
<p>Here’s the basic API interaction function I started with:</p>
<pre><code class="lang-python">def call_llm(prompt):
    headers = {<span class="hljs-string">"Content-Type"</span>: <span class="hljs-string">"application/json"</span>}
    payload = {
        <span class="hljs-string">"model"</span>: <span class="hljs-string">"deepseek-r1"</span>,
        <span class="hljs-string">"prompt"</span>: prompt,
        <span class="hljs-string">"temperature"</span>: <span class="hljs-number">0.7</span>,
        <span class="hljs-string">"max_tokens"</span>: <span class="hljs-number">8000</span>
    }
    <span class="hljs-keyword">try</span>:
        response = requests.post(LLM_API_URL, json=payload, headers=headers)
        response.raise_for_status()
        <span class="hljs-keyword">return</span> response.json()[<span class="hljs-string">"choices"</span>][<span class="hljs-number">0</span>][<span class="hljs-string">"text"</span>].strip()
    <span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
        print(<span class="hljs-string">f"LLM API call failed: <span class="hljs-subst">{e}</span>"</span>)
        <span class="hljs-keyword">return</span> <span class="hljs-string">""</span>
</code></pre>
<hr />
<h3 id="heading-challenges-encountered"><strong>Challenges Encountered</strong></h3>
<p>The early versions struggled significantly with handling large Solidity files, often hitting token limits or experiencing API timeouts. Error handling at this stage was minimal, causing unexpected failures that required restarting the entire analysis—a frustrating user experience.</p>
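<p>A common mitigation for the token-limit problem is to split oversized sources into overlapping chunks and analyse them piecewise. A character-based sketch (the chunk size is an assumption, not the tool's actual limit):</p>

```python
def chunk_source(source, max_chars=8000, overlap=200):
    """Split a large contract into overlapping chunks for separate LLM calls."""
    chunks = []
    start = 0
    while start < len(source):
        end = min(start + max_chars, len(source))
        chunks.append(source[start:end])
        if end == len(source):
            break
        start = end - overlap  # overlap keeps context across chunk boundaries
    return chunks
```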
<h3 id="heading-reflections-and-improvements"><strong>Reflections and Improvements</strong></h3>
<p>From these initial setbacks, I learned how crucial robust error handling was. Implementing structured error catching and refining my prompts reduced crashes and improved reliability, paving the way for more ambitious developments.</p>
<p>We will stop here, and I’ll see you on the next one.</p>
<p>pxng0lin.</p>
]]></content:encoded></item><item><title><![CDATA[SFA: /1 - Conceptualising SFA – Building the Foundation (Version 1)]]></title><description><![CDATA[When I first sat down to create the Single File AI-Agent (SFA), my goal was ambitious yet straightforward: to streamline the process of security analysis for smart contracts by using a fully localised AI-driven system. The inspiration for this projec...]]></description><link>https://unchained.pxng0lin.xyz/1-the-birth-of-sfa</link><guid isPermaLink="true">https://unchained.pxng0lin.xyz/1-the-birth-of-sfa</guid><category><![CDATA[NotADev]]></category><category><![CDATA[SFA]]></category><category><![CDATA[Web3]]></category><category><![CDATA[AI]]></category><category><![CDATA[Python]]></category><dc:creator><![CDATA[Isa]]></dc:creator><pubDate>Mon, 17 Mar 2025 05:00:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1741900367008/fbbbc8ea-bfb5-4914-8e4e-e3b07e74d23c.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When I first sat down to create the <strong>Single File AI-Agent (SFA)</strong>, my goal was ambitious yet straightforward: to streamline the process of security analysis for smart contracts by using a fully localised AI-driven system. The inspiration for this project came directly from a YouTube video by "Single File AI Agents", which explored the idea of creating robust, modular AI solutions without external dependencies. This video sparked my interest and shaped my vision from the start.</p>
<h2 id="heading-why-sfa"><strong>Why SFA?</strong></h2>
<p>The tedious, repetitive nature of security auditing, especially in the blockchain space, convinced me there must be a better way. Traditionally, this process involved manual reading of lengthy Markdown audit reports, cross-referencing vulnerability databases, and performing in-depth smart contract audits—all of which were time-consuming and error-prone.</p>
<p>I aimed for a system that could:</p>
<ul>
<li><p>Automatically download and process Markdown-formatted security audit reports from GitHub.</p>
</li>
<li><p>Compute semantic embeddings for quick and effective information retrieval.</p>
</li>
<li><p>Store structured data locally in a SQLite database to ensure privacy and autonomy.</p>
</li>
<li><p>Utilise a local language model (via Ollama) to summarise vulnerabilities and suggest mitigations in real-time.</p>
</li>
</ul>
<h2 id="heading-building-a-strong-base"><strong>Building a Strong Base</strong></h2>
<p>I began by setting up a local SQLite database with tables dedicated to reports and code audits:</p>
<pre><code class="lang-python">def init_db(db_path=<span class="hljs-string">"vectorisation.db"</span>):
    conn = sqlite3.connect(db_path)
    c = conn.cursor()
    c.execute(<span class="hljs-string">'''
        CREATE TABLE IF NOT EXISTS reports (
            id TEXT PRIMARY KEY,
            source TEXT,
            content TEXT,
            overall_embedding TEXT,
            section_embeddings TEXT,
            analysis_summary TEXT,
            metadata TEXT
        )
    '''</span>)
    conn.commit()
    conn.close()
</code></pre>
<p>This database ensured data integrity and made future data retrieval straightforward.</p>
<h2 id="heading-early-challenges-and-solutions"><strong>Early Challenges and Solutions</strong></h2>
<p>One initial challenge was handling GitHub Markdown URLs. Many links pointed to files that needed conversion from "blob" URLs to "raw" URLs for direct downloading. To handle this, I created a <a target="_blank" href="https://youtube.com/watch?v=YAIJV48QlXc&amp;si=LEMvrGA2WfLVmOih">helper</a> function:</p>
<pre><code class="lang-python">def convert_to_raw_url(url: str) -&gt; str:
    <span class="hljs-keyword">if</span> <span class="hljs-string">"github.com"</span> <span class="hljs-keyword">in</span> url <span class="hljs-keyword">and</span> <span class="hljs-string">"/blob/"</span> <span class="hljs-keyword">in</span> url:
        <span class="hljs-keyword">return</span> url.replace(<span class="hljs-string">"github.com"</span>, <span class="hljs-string">"raw.githubusercontent.com"</span>).replace(<span class="hljs-string">"/blob/"</span>, <span class="hljs-string">"/"</span>)
    <span class="hljs-keyword">return</span> url
</code></pre>
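<p>For example (the function is repeated here so the snippet runs standalone; the repository path is made up):</p>

```python
def convert_to_raw_url(url: str) -> str:
    if "github.com" in url and "/blob/" in url:
        return (url.replace("github.com", "raw.githubusercontent.com")
                   .replace("/blob/", "/"))
    return url

# A typical blob link converts like so:
print(convert_to_raw_url("https://github.com/user/repo/blob/main/audit.md"))
# -> https://raw.githubusercontent.com/user/repo/main/audit.md
```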
<h2 id="heading-initial-results"><strong>Initial Results</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741900199101/74da3ac1-d2a4-417f-a8d5-915d6f74f5e8.png" alt="output of vectorised reports on 'precision loss'" class="image--center mx-auto" /></p>
<p>Although version 1 lacked sophisticated error handling, it successfully demonstrated that the system could autonomously download Markdown files, compute embeddings using the <code>sentence-transformers</code> library, and store structured data locally. While rudimentary, the initial results were promising, affirming that the core concept of a fully local, AI-driven tool was viable.</p>
<p>So, that makes a start, see you in the next one.</p>
<p>pxng0lin.</p>
]]></content:encoded></item><item><title><![CDATA[print(result) "Part 6 of NotADev"]]></title><description><![CDATA[Aligning the Trading Script with the Enhanced Model
In the last article, I talked about how I collaborated with ChatGPT to introduce new features to my trading model, such as Recursive Feature Elimination (RFE), XGBoost, and hyperparameter tuning. Wi...]]></description><link>https://unchained.pxng0lin.xyz/result-part-6-of-notadev</link><guid isPermaLink="true">https://unchained.pxng0lin.xyz/result-part-6-of-notadev</guid><category><![CDATA[AI]]></category><category><![CDATA[Developer]]></category><category><![CDATA[Python]]></category><category><![CDATA[tradingbot]]></category><dc:creator><![CDATA[Isa]]></dc:creator><pubDate>Thu, 14 Nov 2024 06:00:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1730992655378/243aa0d0-724a-464e-8baf-864e76f6f368.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-aligning-the-trading-script-with-the-enhanced-model"><strong>Aligning the Trading Script with the Enhanced Model</strong></h1>
<p>In the last article, I talked about how I collaborated with ChatGPT to introduce new features to my trading model, such as Recursive Feature Elimination (RFE), XGBoost, and hyperparameter tuning. With these enhancements, the next challenge was ensuring that my trading script could properly make use of the newly trained model. In this article, I'll discuss how I worked with ChatGPT to overcome the challenges of integrating the updated model into the trading logic.</p>
<h3 id="heading-adapting-the-trading-script">Adapting the Trading Script</h3>
<p>With the upgraded model featuring 17 new indicators, including the Average Directional Index (ADX), Momentum (MOM), and Rate of Change (ROC), it became apparent that my existing trading script needed significant changes to handle the expanded feature set. This was not just a simple update; it required a comprehensive alignment between the feature engineering used during model training and the real-time data preparation for trading decisions.</p>
<p>One of the most crucial aspects was ensuring consistency in feature names and their order. I noticed that, when the trading script tried to make predictions using real-time market data, it frequently ran into errors because the features did not match those expected by the model. The model would raise an error due to either missing features or incorrect ordering, resulting in frustrating setbacks.</p>
<p>After explaining these challenges to ChatGPT, it proposed a solution to ensure feature alignment throughout the scripts. Here's a snippet showing how I extracted and prepared the features for real-time predictions:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd

<span class="hljs-comment"># Extracting features in the same order as during training</span>
features_df = pd.DataFrame([features], columns=FEATURE_COLUMNS)
features_df = features_df.fillna(<span class="hljs-number">0</span>)  <span class="hljs-comment"># Handling any missing values</span>

<span class="hljs-comment"># Making prediction</span>
prediction_proba = model.predict_proba(features_df)
buy_proba = prediction_proba[<span class="hljs-number">0</span>][<span class="hljs-number">1</span>]
sell_proba = prediction_proba[<span class="hljs-number">0</span>][<span class="hljs-number">0</span>]
</code></pre>
<h3 id="heading-handling-feature-order-and-consistency">Handling Feature Order and Consistency</h3>
<p>To maintain consistency, I had to revisit the feature engineering steps I used during model training and replicate them exactly in the trading script. This meant incorporating calculations for all 17 features, such as the Exponential Moving Average (EMA), Bollinger Bands, and others, ensuring the order was identical.</p>
<p>Below is a snippet that shows the feature calculation in real-time within the trading script, which mirrors what was done during training:</p>
<pre><code class="lang-python"><span class="hljs-comment"># Calculate features for real-time data</span>
latest_data[<span class="hljs-string">'ema50'</span>] = talib.EMA(latest_data[<span class="hljs-string">'close'</span>], timeperiod=<span class="hljs-number">50</span>)
latest_data[<span class="hljs-string">'ema200'</span>] = talib.EMA(latest_data[<span class="hljs-string">'close'</span>], timeperiod=<span class="hljs-number">200</span>)
latest_data[<span class="hljs-string">'adx'</span>] = talib.ADX(latest_data[<span class="hljs-string">'high'</span>], latest_data[<span class="hljs-string">'low'</span>], latest_data[<span class="hljs-string">'close'</span>], timeperiod=<span class="hljs-number">14</span>)
latest_data[<span class="hljs-string">'mom'</span>] = talib.MOM(latest_data[<span class="hljs-string">'close'</span>], timeperiod=<span class="hljs-number">10</span>)
<span class="hljs-comment"># ... calculate other features as well</span>
</code></pre>
<p>Once I had ensured the features were calculated consistently, ChatGPT suggested a way to handle any potential discrepancies by checking for missing or unordered features and then adjusting accordingly. This helped prevent runtime errors and allowed the model to predict as expected.</p>
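<p>The alignment check ChatGPT proposed isn't reproduced verbatim here, but a rough sketch of the idea looks like this; <code>FEATURE_COLUMNS</code> matches the snippets above, and the abbreviated column list is illustrative only.</p>
<pre><code class="lang-python">import pandas as pd

FEATURE_COLUMNS = ["ema50", "ema200", "adx", "mom"]  # abbreviated for illustration

def align_features(features_df):
    # Reindexing forces the exact column set and order the model was trained on;
    # any feature missing from the live data shows up as NaN and is zero-filled.
    return features_df.reindex(columns=FEATURE_COLUMNS).fillna(0)

live = pd.DataFrame([{"mom": 1.2, "ema50": 10.5, "adx": 25.0}])  # unordered, 'ema200' missing
aligned = align_features(live)
print(list(aligned.columns))  # ['ema50', 'ema200', 'adx', 'mom']
</code></pre>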
<h3 id="heading-managing-recent-market-activity">Managing Recent Market Activity</h3>
<p>Another improvement ChatGPT helped me make was incorporating recent market activity to improve the accuracy of predictions. The idea was to add a "run rate" feature that considered recent volume trends and volatility. This helped the bot become more context-aware, allowing it to factor in the recent momentum of the market when making decisions. Here’s a simplified example of how this was integrated:</p>
<pre><code class="lang-python"><span class="hljs-comment"># Calculate recent volume trend</span>
latest_data[<span class="hljs-string">'volume_change'</span>] = latest_data[<span class="hljs-string">'volume'</span>].pct_change(periods=<span class="hljs-number">3</span>)

<span class="hljs-comment"># Calculate rolling volatility</span>
latest_data[<span class="hljs-string">'volatility'</span>] = latest_data[<span class="hljs-string">'close'</span>].rolling(window=<span class="hljs-number">10</span>).std()
</code></pre>
<p>These additional metrics allowed the model to weigh recent market conditions more heavily, giving it a better edge in predicting buy and sell opportunities.</p>
<h3 id="heading-overcoming-errors-and-building-robustness">Overcoming Errors and Building Robustness</h3>
<p>Throughout this process, I faced multiple errors. The infamous "feature mismatch" error returned multiple times, and the challenge of handling <code>NaN</code> values or incomplete data meant I had to be meticulous. ChatGPT was instrumental in suggesting techniques such as filling missing values with zeros or using forward-fill methods to handle incomplete data, ensuring no features were left undefined.</p>
<p>Here is an example of how I handled missing values to avoid model errors:</p>
<pre><code class="lang-python"><span class="hljs-comment"># Handle missing values</span>
features_df = features_df.fillna(<span class="hljs-number">0</span>)  <span class="hljs-comment"># Replace NaN values with zero to avoid prediction errors</span>
</code></pre>
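<p>The forward-fill alternative mentioned above would look something like this (a sketch with made-up values; the article's own script shows only the zero-fill variant):</p>
<pre><code class="lang-python">import pandas as pd

df = pd.DataFrame({"ema50": [10.0, None, 10.4], "adx": [None, 22.0, 23.0]})

# Forward-fill propagates the last known value; a leading NaN with nothing
# to carry forward is then zero-filled as a last resort.
df = df.ffill().fillna(0)
print(df["adx"].tolist())  # [0.0, 22.0, 23.0]
</code></pre>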
<p>ChatGPT's approach to problem-solving here was iterative: each time an error was encountered, it would analyse the traceback, suggest corrections, and propose enhancements. It felt like a collaborative debugging session, and over time, these iterative fixes made my bot much more resilient to runtime issues.</p>
<h3 id="heading-the-result">The Result</h3>
<p>After many rounds of testing and troubleshooting, the new trading script, combined with the enhanced model, was finally able to operate smoothly. The bot could now consistently make predictions using all 17 features, providing a more comprehensive analysis of the market. The integration of recent market activity indicators added an extra layer of context, further improving the bot's decision-making abilities.</p>
<p>The improvements in prediction accuracy were evident—the bot was now making fewer false-positive trades and identifying profitable opportunities with more reliability. The journey was challenging, but having ChatGPT as a guide made all the difference, providing insights and coding solutions whenever I encountered roadblocks.</p>
<p>Stay tuned for the next article, where I'll share how I worked on optimizing the bot's transaction logic, minimizing transaction costs, and further improving overall profitability.</p>
]]></content:encoded></item><item><title><![CDATA[print(result) "Part 5 of NotADev"]]></title><description><![CDATA[Advancing the Model with New Features and Overcoming Challenges
As I continue my journey to enhance my trading bot, I've reached a significant milestone—the integration of new machine learning features into my model. This has been both an exciting an...]]></description><link>https://unchained.pxng0lin.xyz/result-part-5-of-notadev</link><guid isPermaLink="true">https://unchained.pxng0lin.xyz/result-part-5-of-notadev</guid><category><![CDATA[algorithms]]></category><category><![CDATA[tradingbot]]></category><category><![CDATA[Developer]]></category><category><![CDATA[Python]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[Isa]]></dc:creator><pubDate>Fri, 01 Nov 2024 06:00:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1730992421420/504975eb-e9ae-4853-a6ac-d996b43d4836.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-advancing-the-model-with-new-features-and-overcoming-challenges"><strong>Advancing the Model with New Features and Overcoming Challenges</strong></h1>
<p>As I continue my journey to enhance my trading bot, I've reached a significant milestone—the integration of new machine learning features into my model. This has been both an exciting and challenging phase, where the focus has been on improving the prediction accuracy of buy and sell signals. In this post, I'll share the details of how I worked with ChatGPT to incorporate Recursive Feature Elimination (RFE), XGBoost, and hyperparameter tuning, and what these changes mean for the bot's performance.</p>
<h3 id="heading-integrating-rfe-xgboost-and-hyperparameter-tuning">Integrating RFE, XGBoost, and Hyperparameter Tuning</h3>
<p>The journey started with a realization: the existing model's feature set wasn't providing enough granularity for accurate trading decisions. To address this, I asked ChatGPT to suggest improvements. It decided to add new features and apply RFE to determine which ones were most important. The goal was simple—let's keep only what truly matters. This led to an extensive feature engineering process that added indicators like Average Directional Index (ADX), Rate of Change (ROC), Momentum (MOM), and many others.</p>
<p>Once these features were ready, ChatGPT used RFE to identify which ones significantly contributed to model performance. This reduced the feature set to the most relevant indicators, which in turn helped improve model training efficiency and reduce overfitting.</p>
<p>Here's a snippet showing how RFE was applied to the model:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> sklearn.feature_selection <span class="hljs-keyword">import</span> RFE
<span class="hljs-keyword">from</span> sklearn.ensemble <span class="hljs-keyword">import</span> RandomForestClassifier

<span class="hljs-comment"># Initializing the model for feature selection</span>
model_for_rfe = RandomForestClassifier(random_state=<span class="hljs-number">42</span>)

<span class="hljs-comment"># Selecting top 10 features using RFE</span>
rfe = RFE(model_for_rfe, n_features_to_select=<span class="hljs-number">10</span>)
X_train_rfe = rfe.fit_transform(X_train, y_train)
</code></pre>
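<p>One detail worth knowing when reproducing this: <code>fit_transform</code> returns a bare array, so recovering <em>which</em> columns survived takes <code>get_support()</code>. Below is a self-contained sketch on synthetic data (the real trading feature set isn't shown here):</p>
<pre><code class="lang-python">import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

X, y = make_classification(n_samples=200, n_features=20, random_state=42)
rfe = RFE(RandomForestClassifier(random_state=42), n_features_to_select=10)
X_rfe = rfe.fit_transform(X, y)

# get_support() flags the retained columns, so named features can be recovered
kept = np.where(rfe.get_support())[0]
print(X_rfe.shape[1], len(kept))  # 10 10
</code></pre>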
<p>Next came XGBoost—a powerful tool in the machine learning arsenal. I asked ChatGPT if it could improve the model further, and it suggested experimenting with XGBoost alongside the traditional Random Forest approach, evaluating which performed better with the trading dataset. It also recommended hyperparameter tuning, using GridSearchCV to test different combinations for the Random Forest model and find the optimal setup. The resulting model showed noticeable improvements in performance metrics, boosting the accuracy of predictions and helping with more informed trading decisions.</p>
<p>Here's an example of how GridSearchCV was used for hyperparameter tuning:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> sklearn.model_selection <span class="hljs-keyword">import</span> GridSearchCV

param_grid = {
    <span class="hljs-string">'n_estimators'</span>: [<span class="hljs-number">100</span>, <span class="hljs-number">200</span>],
    <span class="hljs-string">'max_depth'</span>: [<span class="hljs-number">10</span>, <span class="hljs-number">20</span>],
    <span class="hljs-string">'min_samples_split'</span>: [<span class="hljs-number">2</span>, <span class="hljs-number">5</span>],
}

rf = RandomForestClassifier(random_state=<span class="hljs-number">42</span>)
grid_search = GridSearchCV(rf, param_grid, cv=<span class="hljs-number">3</span>, n_jobs=<span class="hljs-number">-1</span>, verbose=<span class="hljs-number">2</span>)
grid_search.fit(X_train_rfe, y_train)

<span class="hljs-comment"># Best model after hyperparameter tuning</span>
best_rf = grid_search.best_estimator_
</code></pre>
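<p>The head-to-head evaluation between the tuned Random Forest and a boosted model can be as simple as scoring both on a held-out slice. This sketch uses scikit-learn's <code>GradientBoostingClassifier</code> as a stand-in (swap in <code>XGBClassifier</code> where <code>xgboost</code> is installed) and synthetic data rather than the trading dataset; <code>shuffle=False</code> keeps the split honest for time-ordered data.</p>
<pre><code class="lang-python">from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=10, random_state=42)
# shuffle=False preserves row order, matching the time-series discipline used elsewhere
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, shuffle=False)

scores = {}
for name, model in [("random_forest", RandomForestClassifier(random_state=42)),
                    ("boosted", GradientBoostingClassifier(random_state=42))]:
    model.fit(X_tr, y_tr)
    scores[name] = accuracy_score(y_te, model.predict(X_te))

print(scores)  # accuracy per model; the better scorer wins the comparison
</code></pre>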
<h3 id="heading-challenges-faced-and-lessons-learned">Challenges Faced and Lessons Learned</h3>
<p>This process wasn't without its challenges. Integrating these new features meant dealing with various compatibility issues between the training script and the prediction logic in the trading script. For example, after training the model with 17 features, I encountered repeated errors when using the model to make predictions in real-time trading.</p>
<p>One of the most frustrating errors was the infamous "feature mismatch" problem. The model expected a specific order and set of features, but the data I was providing during predictions was either incorrectly ordered or incomplete.</p>
<p>To solve this, ChatGPT suggested ensuring strict consistency between the feature set used during training and the real-time feature extraction in the trading script. This was a valuable lesson—the importance of keeping feature engineering consistent across all stages of model development.</p>
<p>Here's how I ensured feature consistency during prediction:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd

<span class="hljs-comment"># Extracting features in the same order as during training</span>
features_df = pd.DataFrame([features], columns=FEATURE_COLUMNS)
features_df = features_df.fillna(<span class="hljs-number">0</span>)

<span class="hljs-comment"># Making prediction</span>
prediction_proba = model.predict_proba(features_df)
buy_proba = prediction_proba[<span class="hljs-number">0</span>][<span class="hljs-number">1</span>]
sell_proba = prediction_proba[<span class="hljs-number">0</span>][<span class="hljs-number">0</span>]
</code></pre>
<h3 id="heading-the-impact-on-prediction-accuracy">The Impact on Prediction Accuracy</h3>
<p>These changes have had a substantial impact on the bot's performance. The new features have provided more depth for understanding market movements. RFE and XGBoost allowed us to focus on what's truly important while hyperparameter tuning made the model leaner and more effective. In practice, this means the bot is now better at identifying potential opportunities and minimizing false signals—a crucial aspect of successful algorithmic trading.</p>
<p>This stage of development marked a turning point, where the focus shifted from simply making predictions to making accurate, reliable predictions that could translate into profitable trades. While the journey is far from over, I'm excited about the improvements so far and ready to tackle the next challenges that arise.</p>
<p>Stay tuned for the next update, where I'll share how ChatGPT helped me align the trading script to properly consume the upgraded model features and how I managed to make it all work together seamlessly.</p>
<p>pxng0lin.</p>
]]></content:encoded></item><item><title><![CDATA[print(result) "Part 4 of NotADev"]]></title><description><![CDATA[Introducing Machine Learning
Now that the data was enriched with technical indicators and lag features, it was time to build a predictive model to forecast stock movements.

💭
I’ve worked on predictable models for decades, mainly around services, cu...]]></description><link>https://unchained.pxng0lin.xyz/result-part-4-of-notadev</link><guid isPermaLink="true">https://unchained.pxng0lin.xyz/result-part-4-of-notadev</guid><category><![CDATA[Developer]]></category><category><![CDATA[Python]]></category><category><![CDATA[AI]]></category><category><![CDATA[#stockanalysis]]></category><category><![CDATA[#shares]]></category><dc:creator><![CDATA[Isa]]></dc:creator><pubDate>Thu, 24 Oct 2024 23:00:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1727991766878/4ddca725-b1a8-4531-9e6f-8d5f1c9f8c92.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introducing-machine-learning"><strong>Introducing Machine Learning</strong></h2>
<p>Now that the data was enriched with technical indicators and lag features, it was time to build a predictive model to forecast stock movements.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💭</div>
<div data-node-type="callout-text">I’ve worked on predictive models for decades, mainly around services, customers or debt, mostly in the telecommunications industry. The stock market, and the crypto market even more so, aren’t easy eggs to crack, and I’m no Jim Simons, but I did want an element of prediction to assist the technical indicators, so I could get a reasonable inkling of the share price to come and hopefully catch more wins than losses.</div>
</div>

<hr />
<h3 id="heading-building-the-predictive-model"><strong>Building the Predictive Model</strong></h3>
<p>The AI assistant suggested using <strong>XGBoost</strong>, a powerful and efficient gradient boosting algorithm that's well-suited for tabular data.</p>
<blockquote>
<h2 id="heading-what-is-xgboost-in-machine-learning"><strong>What is XGBoost in Machine Learning?</strong></h2>
<p><a target="_blank" href="https://www.analyticsvidhya.com/blog/2018/09/an-end-to-end-guide-to-understand-the-math-behind-xgboost/#:~:text=XGBoost%20builds%20a%20predictive%20model,made%20by%20the%20existing%20ones.">XGBoost, or eXtreme Gradient Boosting, is a machine learning algorithm</a> under ensemble learning. It is popular for supervised learning tasks, such as regression and classification. XGBoost builds a predictive model by combining the predictions of multiple individual models, often decision trees, in an iterative manner.</p>
<p>The algorithm works by sequentially adding weak learners to the ensemble, with each new learner focusing on correcting the errors made by the existing ones. It uses a gradient descent optimization technique to minimize a predefined loss function during training.</p>
<p>Key features of XGBoost Algorithm include its ability to handle complex relationships in data, regularization techniques to prevent overfitting and incorporation of parallel processing for efficient computation.</p>
<p>source: <a target="_blank" href="https://www.analyticsvidhya.com/blog/2018/09/an-end-to-end-guide-to-understand-the-math-behind-xgboost/#:~:text=XGBoost%20builds%20a%20predictive%20model,made%20by%20the%20existing%20ones.">What is the XGBoost algorithm and how does it work? (analyticsvidhya.com)</a></p>
</blockquote>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> xgboost <span class="hljs-keyword">import</span> XGBClassifier

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">prepare_data</span>(<span class="hljs-params">data</span>):</span>
    <span class="hljs-comment"># Define the target variable</span>
    data[<span class="hljs-string">'Future_Return'</span>] = (data[<span class="hljs-string">'Close'</span>].shift(<span class="hljs-number">-1</span>) - data[<span class="hljs-string">'Close'</span>]) / data[<span class="hljs-string">'Close'</span>]
    data[<span class="hljs-string">'Target'</span>] = (data[<span class="hljs-string">'Future_Return'</span>] &gt; <span class="hljs-number">0</span>).astype(int)
    data.dropna(inplace=<span class="hljs-literal">True</span>)

    <span class="hljs-comment"># Select features</span>
    features = data.drop([<span class="hljs-string">'Target'</span>, <span class="hljs-string">'Future_Return'</span>, <span class="hljs-string">'Close'</span>], axis=<span class="hljs-number">1</span>).columns
    X = data[features]
    y = data[<span class="hljs-string">'Target'</span>]
    <span class="hljs-keyword">return</span> X, y
</code></pre>
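<p>A tiny worked example (made-up prices) shows what the target construction above produces:</p>
<pre><code class="lang-python">import pandas as pd

close = pd.Series([100.0, 102.0, 101.0, 103.0], name="Close")
future_return = (close.shift(-1) - close) / close

# Target is 1 when the next bar closes higher, 0 otherwise; the final row has
# no "next bar" (NaN return) and is removed by dropna() in prepare_data.
target = future_return.gt(0).astype(int)
print(target.tolist()[:3])  # [1, 0, 1]
</code></pre>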
<div data-node-type="callout">
<div data-node-type="callout-emoji">🛑</div>
<div data-node-type="callout-text"><strong>Challenge:</strong> When splitting the data into training and testing sets, it initially used random shuffling, which was not appropriate for time series data as it breaks the temporal order - Obviously, the AI gave me the reason after I ran into errors.</div>
</div>

<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><strong>AI's Solution:</strong> The AI recommended using a time series split to preserve the sequence of data.</div>
</div>

<pre><code class="lang-python"><span class="hljs-keyword">from</span> sklearn.model_selection <span class="hljs-keyword">import</span> TimeSeriesSplit

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">train_model</span>(<span class="hljs-params">X, y</span>):</span>
    tscv = TimeSeriesSplit(n_splits=<span class="hljs-number">5</span>)
    model = XGBClassifier(use_label_encoder=<span class="hljs-literal">False</span>, eval_metric=<span class="hljs-string">'logloss'</span>)
    <span class="hljs-comment"># Note: each iteration refits from scratch, so the returned model is the one</span>
    <span class="hljs-comment"># trained on the final (largest) training window of the time-series split.</span>
    <span class="hljs-keyword">for</span> train_index, test_index <span class="hljs-keyword">in</span> tscv.split(X):
        X_train, X_test = X.iloc[train_index], X.iloc[test_index]
        y_train, y_test = y.iloc[train_index], y.iloc[test_index]
        model.fit(X_train, y_train)
    <span class="hljs-keyword">return</span> model
</code></pre>
<hr />
<div data-node-type="callout">
<div data-node-type="callout-emoji">ℹ</div>
<div data-node-type="callout-text">A little insight into train and test.</div>
</div>

<h3 id="heading-train-and-test-data">Train and Test Data</h3>
<p>When building a machine learning model, it’s crucial to evaluate its performance on unseen data. This is where the concepts of train and test data come into play.</p>
<ol>
<li><p><strong>Training Data</strong>:</p>
<ul>
<li><p><strong>Purpose</strong>: Used to train the model.</p>
</li>
<li><p><strong>Process</strong>: The model learns patterns, relationships, and features from this data.</p>
</li>
<li><p><strong>Example</strong>: If you have a dataset of house prices, the training data would include features like the number of rooms, location, and the corresponding house prices.</p>
</li>
</ul>
</li>
<li><p><strong>Test Data</strong>:</p>
<ul>
<li><p><strong>Purpose</strong>: Used to evaluate the model’s performance.</p>
</li>
<li><p><strong>Process</strong>: After training, the model makes predictions on the test data, and these predictions are compared to the actual values to assess accuracy.</p>
</li>
<li><p><strong>Example</strong>: Continuing with the house prices example, the test data would also include features like the number of rooms and location, but the model would predict the house prices, which are then compared to the actual prices.</p>
</li>
</ul>
</li>
</ol>
<hr />
<h3 id="heading-using-xgboost-with-train-and-test-data">Using XGBoost with Train and Test Data</h3>
<p>Here’s a quick example guide to using XGBoost with train and test data:</p>
<ol>
<li><p><strong>Import Libraries</strong>:</p>
<pre><code class="lang-python"> <span class="hljs-keyword">import</span> xgboost <span class="hljs-keyword">as</span> xgb
 <span class="hljs-keyword">from</span> sklearn.model_selection <span class="hljs-keyword">import</span> train_test_split
 <span class="hljs-keyword">from</span> sklearn.metrics <span class="hljs-keyword">import</span> accuracy_score
</code></pre>
</li>
<li><p><strong>Load and Prepare Data</strong>:</p>
<pre><code class="lang-python"> <span class="hljs-comment"># Example using a dataset</span>
 <span class="hljs-keyword">from</span> sklearn.datasets <span class="hljs-keyword">import</span> load_breast_cancer
 data = load_breast_cancer()
 X = data.data
 y = data.target
</code></pre>
</li>
<li><p><strong>Split Data into Train and Test Sets</strong>:</p>
<pre><code class="lang-python"> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=<span class="hljs-number">0.2</span>, random_state=<span class="hljs-number">42</span>)
</code></pre>
</li>
<li><p><strong>Train the XGBoost Model</strong>:</p>
<pre><code class="lang-python"> model = xgb.XGBClassifier()
 model.fit(X_train, y_train)
</code></pre>
</li>
<li><p><strong>Make Predictions on Test Data</strong>:</p>
<pre><code class="lang-python"> y_pred = model.predict(X_test)
</code></pre>
</li>
<li><p><strong>Evaluate the Model</strong>:</p>
<pre><code class="lang-python"> accuracy = accuracy_score(y_test, y_pred)
 print(<span class="hljs-string">f"Accuracy: <span class="hljs-subst">{accuracy * <span class="hljs-number">100</span>:<span class="hljs-number">.2</span>f}</span>%"</span>)
</code></pre>
</li>
</ol>
<h3 id="heading-why-split-datahttpswwwbingcomnewfaq">Why Split Data?</h3>
<ul>
<li><p><strong>Avoid Overfitting</strong>: By evaluating the model on unseen data (test data), you can ensure it generalises well and isn’t just memorising the training data.</p>
</li>
<li><p><strong>Model Validation</strong>: It helps in validating the model’s performance and tuning hyperparameters effectively.</p>
</li>
</ul>
<h3 id="heading-conclusion">Conclusion</h3>
<p>Using train and test data is essential for building robust machine learning models. XGBoost, with its powerful capabilities, can efficiently handle this process, ensuring high performance and accuracy.</p>
<p>Resources: <a target="_blank" href="https://www.youtube.com/watch?v=aLOQD66Sj0g">How to train XGBoost models in Python (youtube.com)</a></p>
<hr />
<h3 id="heading-handling-overfittinghttpswwwyoutubecomwatchvaloqd66sj0g"><strong>Handling Overfitting</strong></h3>
<div data-node-type="callout">
<div data-node-type="callout-emoji">🛑</div>
<div data-node-type="callout-text"><strong>Challenge:</strong> The model performed exceptionally well on the training data but poorly on the test data, indicating overfitting.</div>
</div>

<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><strong>AI's Solution:</strong> To combat overfitting, the AI suggested:</div>
</div>

<ul>
<li><p><strong>Hyperparameter Tuning:</strong> Adjusting parameters like <code>max_depth</code>, <code>n_estimators</code>, and <code>learning_rate</code> to find the optimal combination.</p>
</li>
<li><p><strong>Cross-Validation:</strong> Using <code>TimeSeriesSplit</code> to perform cross-validation that respects the temporal order.</p>
</li>
<li><p><strong>Regularisation:</strong> Adding regularisation parameters like <code>reg_alpha</code> and <code>reg_lambda</code> to penalise complex models.</p>
</li>
</ul>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> sklearn.model_selection <span class="hljs-keyword">import</span> RandomizedSearchCV

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">train_model_with_cv</span>(<span class="hljs-params">X, y</span>):</span>
    model = XGBClassifier(use_label_encoder=<span class="hljs-literal">False</span>, eval_metric=<span class="hljs-string">'logloss'</span>)
    param_grid = {
        <span class="hljs-string">'n_estimators'</span>: [<span class="hljs-number">50</span>, <span class="hljs-number">100</span>, <span class="hljs-number">150</span>],
        <span class="hljs-string">'max_depth'</span>: [<span class="hljs-number">3</span>, <span class="hljs-number">5</span>, <span class="hljs-number">7</span>],
        <span class="hljs-string">'learning_rate'</span>: [<span class="hljs-number">0.01</span>, <span class="hljs-number">0.05</span>, <span class="hljs-number">0.1</span>],
        <span class="hljs-string">'subsample'</span>: [<span class="hljs-number">0.8</span>, <span class="hljs-number">1.0</span>],
        <span class="hljs-string">'colsample_bytree'</span>: [<span class="hljs-number">0.8</span>, <span class="hljs-number">1.0</span>],
        <span class="hljs-string">'reg_alpha'</span>: [<span class="hljs-number">0</span>, <span class="hljs-number">0.1</span>, <span class="hljs-number">0.5</span>],
        <span class="hljs-string">'reg_lambda'</span>: [<span class="hljs-number">1</span>, <span class="hljs-number">1.5</span>, <span class="hljs-number">2</span>]
    }
    tscv = TimeSeriesSplit(n_splits=<span class="hljs-number">5</span>)
    grid_search = RandomizedSearchCV(model, param_grid, cv=tscv, scoring=<span class="hljs-string">'accuracy'</span>, n_iter=<span class="hljs-number">10</span>)
    grid_search.fit(X, y)
    <span class="hljs-keyword">return</span> grid_search.best_estimator_
</code></pre>
<p>This approach improved the model's generalisation to unseen data.</p>
<hr />
<h3 id="heading-handling-class-imbalance"><strong>Handling Class Imbalance</strong></h3>
<div data-node-type="callout">
<div data-node-type="callout-emoji">🛑</div>
<div data-node-type="callout-text"><strong>Challenge:</strong> The target variable was imbalanced, with more instances of one class over the other, which can bias the model.</div>
</div>

<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><strong>AI's Solution:</strong> The AI suggested using <strong>SMOTE</strong> (Synthetic Minority Over-sampling Technique) to balance the classes.</div>
</div>

<blockquote>
<h3 id="heading-what-is-smote">What is SMOTE?</h3>
<p>SMOTE is a technique used to create synthetic samples for the minority class in a dataset. This helps balance the class distribution, which is crucial for training machine learning models effectively on imbalanced data.</p>
<h3 id="heading-how-does-smote-work">How Does SMOTE Work?</h3>
<ol>
<li><p><strong>Identify Minority Class Samples</strong>: SMOTE starts by identifying the samples in the minority class.</p>
</li>
<li><p><strong>Generate Synthetic Samples</strong>: It then generates new synthetic samples by interpolating between existing minority class samples. This is done by selecting two or more similar instances and creating a new instance that lies between them in the feature space.</p>
</li>
<li><p><strong>Add Synthetic Samples to Dataset</strong>: These synthetic samples are added to the dataset, resulting in a more balanced class distribution.</p>
</li>
</ol>
<h3 id="heading-benefits-of-smote">Benefits of SMOTE</h3>
<ul>
<li><p><strong>Improves Model Performance</strong>: By balancing the dataset, models can learn better and perform more accurately on the minority class.</p>
</li>
<li><p><strong>Reduces Overfitting</strong>: Unlike simple oversampling (which duplicates minority class samples), SMOTE reduces the risk of overfitting by creating new, unique samples.</p>
</li>
</ul>
</blockquote>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> imblearn.over_sampling <span class="hljs-keyword">import</span> SMOTE

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">balance_classes</span>(<span class="hljs-params">X, y</span>):</span>
    smote = SMOTE(random_state=<span class="hljs-number">42</span>)
    X_resampled, y_resampled = smote.fit_resample(X, y)
    <span class="hljs-keyword">return</span> X_resampled, y_resampled
</code></pre>
<p>After balancing the classes, the model's performance improved significantly.</p>
<hr />
<h3 id="heading-encountered-error"><strong>Encountered Error:</strong></h3>
<p>While training the model, I ran into an error:</p>
<pre><code class="lang-bash">ValueError: could not convert string to <span class="hljs-built_in">float</span>: <span class="hljs-string">'2024-03-07'</span>
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><strong>AI's Solution:</strong> The AI pointed out that non-numeric data (like date strings) were included in the feature set. To fix this, we ensured that only numeric columns were used.</div>
</div>

<pre><code class="lang-python"><span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">select_numeric_features</span>(<span class="hljs-params">data</span>):</span>
    numeric_cols = data.select_dtypes(include=[np.number]).columns
    <span class="hljs-keyword">return</span> data[numeric_cols]
</code></pre>
<p>By selecting only numeric features, we eliminated the error.</p>
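<p>As a quick sanity check on a toy frame (not the bot's real data), the helper drops any non-numeric column such as a date string:</p>

```python
import numpy as np
import pandas as pd

def select_numeric_features(data):
    # Keep only columns with a numeric dtype
    numeric_cols = data.select_dtypes(include=[np.number]).columns
    return data[numeric_cols]

# A small frame mixing numeric features with a date-string column,
# mirroring the situation that triggered the ValueError
df = pd.DataFrame({
    "Close": [1.5, 2.0, 2.5],
    "RSI": [30.0, 45.0, 60.0],
    "Date": ["2024-03-07", "2024-03-08", "2024-03-09"],
})
numeric_only = select_numeric_features(df)
print(list(numeric_only.columns))  # → ['Close', 'RSI']
```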
<p><strong>Results so far</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727683026269/70442f0b-153f-48d4-9394-d8702909f568.png" alt class="image--center mx-auto" /></p>
<hr />
<p>Excellent: we now have a working bot with analysis, back-testing, and predictions. The accuracy is quite low, though, so improving it will be my next item to work on (or rather, the AI's).</p>
<p>pxng0lin.</p>
]]></content:encoded></item><item><title><![CDATA[print(result) "Part 3 of NotADev"]]></title><description><![CDATA[Enriching Data with Technical Indicators
With the stock data successfully fetched and initial error handling in place, it was time to delve deeper into the data to make it more informative for our predictive models. The idea was to enrich the data wi...]]></description><link>https://unchained.pxng0lin.xyz/result-part-3-of-notadev</link><guid isPermaLink="true">https://unchained.pxng0lin.xyz/result-part-3-of-notadev</guid><category><![CDATA[Developer]]></category><category><![CDATA[Python]]></category><category><![CDATA[telegram]]></category><category><![CDATA[AI]]></category><category><![CDATA[tradingbot]]></category><category><![CDATA[Trading]]></category><dc:creator><![CDATA[Isa]]></dc:creator><pubDate>Fri, 18 Oct 2024 09:00:24 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1728260906626/0023d2a2-e755-4513-90b1-285b43ca890e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-enriching-data-with-technical-indicators"><strong>Enriching Data with Technical Indicators</strong></h2>
<p>With the stock data successfully fetched and initial error handling in place, it was time to delve deeper into the data to make it more informative for our predictive models. The idea was to enrich the data with <strong>technical indicators</strong>—tools that traders use to analyse past market data to predict future price movements.</p>
<hr />
<h3 id="heading-calculating-technical-indicators"><strong>Calculating Technical Indicators</strong></h3>
<p>The AI assistant suggested utilising the <code>ta</code> library, a comprehensive technical analysis library in Python. This library provides a wide range of technical indicators ready to be used with minimal setup.</p>
<p>We aimed to calculate several key indicators:</p>
<ul>
<li><p><strong>Simple Moving Average (SMA)</strong></p>
</li>
<li><p><strong>Exponential Moving Average (EMA)</strong></p>
</li>
<li><p><strong>Relative Strength Index (RSI)</strong></p>
</li>
<li><p><strong>Moving Average Convergence Divergence (MACD)</strong></p>
</li>
<li><p><strong>Bollinger Bands</strong></p>
</li>
<li><p><strong>Average True Range (ATR)</strong></p>
</li>
<li><p><strong>On-Balance Volume (OBV)</strong></p>
</li>
</ul>
<p>Here's how we implemented the function to add these indicators:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> ta

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">add_technical_indicators</span>(<span class="hljs-params">data</span>):</span>
    <span class="hljs-comment"># Simple Moving Average</span>
    data[<span class="hljs-string">'SMA'</span>] = ta.trend.SMAIndicator(data[<span class="hljs-string">'Close'</span>], window=<span class="hljs-number">14</span>).sma_indicator()

    <span class="hljs-comment"># Exponential Moving Average</span>
    data[<span class="hljs-string">'EMA'</span>] = ta.trend.EMAIndicator(data[<span class="hljs-string">'Close'</span>], window=<span class="hljs-number">14</span>).ema_indicator()

    <span class="hljs-comment"># Relative Strength Index</span>
    data[<span class="hljs-string">'RSI'</span>] = ta.momentum.RSIIndicator(data[<span class="hljs-string">'Close'</span>], window=<span class="hljs-number">14</span>).rsi()

    <span class="hljs-comment"># MACD</span>
    macd = ta.trend.MACD(data[<span class="hljs-string">'Close'</span>])
    data[<span class="hljs-string">'MACD'</span>] = macd.macd()
    data[<span class="hljs-string">'MACD_Signal'</span>] = macd.macd_signal()

    <span class="hljs-comment"># Bollinger Bands</span>
    bb = ta.volatility.BollingerBands(data[<span class="hljs-string">'Close'</span>], window=<span class="hljs-number">20</span>, window_dev=<span class="hljs-number">2</span>)
    data[<span class="hljs-string">'BB_High'</span>] = bb.bollinger_hband()
    data[<span class="hljs-string">'BB_Low'</span>] = bb.bollinger_lband()

    <span class="hljs-comment"># Average True Range</span>
    data[<span class="hljs-string">'ATR'</span>] = ta.volatility.AverageTrueRange(data[<span class="hljs-string">'High'</span>], data[<span class="hljs-string">'Low'</span>], data[<span class="hljs-string">'Close'</span>], window=<span class="hljs-number">14</span>).average_true_range()

    <span class="hljs-comment"># On-Balance Volume</span>
    data[<span class="hljs-string">'OBV'</span>] = ta.volume.OnBalanceVolumeIndicator(data[<span class="hljs-string">'Close'</span>], data[<span class="hljs-string">'Volume'</span>]).on_balance_volume()

    <span class="hljs-comment"># Drop initial rows with NaN values</span>
    data.dropna(inplace=<span class="hljs-literal">True</span>)
    <span class="hljs-keyword">return</span> data
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">🛑</div>
<div data-node-type="callout-text"><strong>Challenge:</strong> Upon executing the function, I encountered a significant number of <code>NaN</code> values, particularly at the beginning of the dataset. This was expected since some indicators require a certain number of periods to calculate their values.</div>
</div>

<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><strong>AI's Solution:</strong> The AI advised that after adding the indicators, I should drop the initial rows containing <code>NaN</code> values to clean the dataset.</div>
</div>

<pre><code class="lang-python">data.dropna(inplace=<span class="hljs-literal">True</span>)
</code></pre>
<p>This adjustment ensured that the dataset was free of missing values and ready for further analysis.</p>
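<p>To see why those leading <code>NaN</code> rows appear, the same effect can be reproduced with a plain pandas rolling mean, which is essentially what a 14-period SMA computes (an illustration, not the <code>ta</code> library's code):</p>

```python
import pandas as pd

# 20 closing prices; a 14-period simple moving average needs 14
# observations, so the first 13 rows come out as NaN
close = pd.Series(range(1, 21), dtype=float)
sma = close.rolling(window=14).mean()

print(int(sma.isna().sum()))   # → 13
print(sma.dropna().iloc[0])    # first valid SMA: mean of 1..14 = 7.5
```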
<p><strong>Additional Consideration:</strong> To make the dataset even richer, the AI suggested adding more technical indicators like <strong>Momentum</strong>, <strong>Chaikin Money Flow (CMF)</strong>, and <strong>Money Flow Index (MFI)</strong>.</p>
<pre><code class="lang-python"><span class="hljs-comment"># Momentum</span>
data[<span class="hljs-string">'Momentum'</span>] = data[<span class="hljs-string">'Close'</span>].diff(<span class="hljs-number">4</span>)

<span class="hljs-comment"># Chaikin Money Flow</span>
data[<span class="hljs-string">'CMF'</span>] = ta.volume.ChaikinMoneyFlowIndicator(
    high=data[<span class="hljs-string">'High'</span>], low=data[<span class="hljs-string">'Low'</span>], close=data[<span class="hljs-string">'Close'</span>], volume=data[<span class="hljs-string">'Volume'</span>], window=<span class="hljs-number">20</span>
).chaikin_money_flow()

<span class="hljs-comment"># Money Flow Index</span>
data[<span class="hljs-string">'MFI'</span>] = ta.volume.MFIIndicator(
    high=data[<span class="hljs-string">'High'</span>], low=data[<span class="hljs-string">'Low'</span>], close=data[<span class="hljs-string">'Close'</span>], volume=data[<span class="hljs-string">'Volume'</span>], window=<span class="hljs-number">14</span>
).money_flow_index()
</code></pre>
<h3 id="heading-feature-engineering-with-lag-features"><strong>Feature Engineering with Lag Features</strong></h3>
<p>To capture temporal dependencies and provide the model with more context, we decided to create <strong>lag features</strong>. Lag features are previous time steps' values of a time series, which can help the model understand how past values influence future ones.</p>
<blockquote>
<p>Temporal dependencies refer to the relationships and patterns between data points in a time series, where the value at a given time is influenced by its previous values. These dependencies are crucial in time series analysis and forecasting, as they help in understanding how past events affect future outcomes.</p>
</blockquote>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">create_lag_features</span>(<span class="hljs-params">data, numeric_cols, lags</span>):</span>
    <span class="hljs-keyword">for</span> col <span class="hljs-keyword">in</span> numeric_cols:
        <span class="hljs-keyword">for</span> lag <span class="hljs-keyword">in</span> lags:
            data[<span class="hljs-string">f'<span class="hljs-subst">{col}</span>_lag<span class="hljs-subst">{lag}</span>'</span>] = data[col].shift(lag)
    data.dropna(inplace=<span class="hljs-literal">True</span>)
    <span class="hljs-keyword">return</span> data
</code></pre>
<p>We applied this function to our data, specifying the numeric columns and the number of lags we wanted to create (e.g., 1, 2).</p>
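<p>On a toy series (not the real dataset), the effect of the lag-feature helper looks like this:</p>

```python
import pandas as pd

def create_lag_features(data, numeric_cols, lags):
    # Add shifted copies of each column, then drop the rows whose
    # lags fall before the start of the series
    for col in numeric_cols:
        for lag in lags:
            data[f'{col}_lag{lag}'] = data[col].shift(lag)
    data.dropna(inplace=True)
    return data

df = pd.DataFrame({"Close": [10.0, 11.0, 12.0, 13.0, 14.0]})
df = create_lag_features(df, ["Close"], lags=[1, 2])
print(list(df.columns))  # → ['Close', 'Close_lag1', 'Close_lag2']
print(len(df))           # → 3  (the first two rows held NaN lags)
```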
<div data-node-type="callout">
<div data-node-type="callout-emoji">🛑</div>
<div data-node-type="callout-text"><strong>Challenge:</strong> After adding lag features, the dataset size increased significantly, and I noticed potential multicollinearity between the original features and their lagged counterparts.</div>
</div>

<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><strong>AI's Solution:</strong> The AI suggested conducting a <strong>correlation analysis</strong> to identify and remove highly correlated features. By calculating the correlation matrix and setting a threshold, we could drop one of the highly correlated pairs to reduce redundancy.</div>
</div>

<blockquote>
<p>Multicollinearity refers to a statistical phenomenon in which two or more predictor variables in a multiple regression model are highly correlated, meaning that one can be linearly predicted from the others with a substantial degree of accuracy. This can cause problems in estimating the coefficients of the regression model, leading to unreliable and unstable estimates. Multicollinearity can inflate the variance of the coefficient estimates and make it difficult to determine the individual effect of each predictor variable on the dependent variable.</p>
</blockquote>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">remove_highly_correlated_features</span>(<span class="hljs-params">data, threshold=<span class="hljs-number">0.9</span></span>):</span>
    corr_matrix = data.corr().abs()
    upper_tri = corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k=<span class="hljs-number">1</span>).astype(bool))
    to_drop = [column <span class="hljs-keyword">for</span> column <span class="hljs-keyword">in</span> upper_tri.columns <span class="hljs-keyword">if</span> any(upper_tri[column] &gt; threshold)]
    data.drop(columns=to_drop, inplace=<span class="hljs-literal">True</span>)
    <span class="hljs-keyword">return</span> data
</code></pre>
<p>Implementing this function helped in reducing the feature set to a more manageable size while retaining the most informative variables.</p>
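<p>A small synthetic example (the columns and random seed are made up for illustration) shows the function dropping a near-duplicate column while keeping an independent one:</p>

```python
import numpy as np
import pandas as pd

def remove_highly_correlated_features(data, threshold=0.9):
    # For each pair of columns above the correlation threshold,
    # drop the second member of the pair
    corr_matrix = data.corr().abs()
    upper_tri = corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k=1).astype(bool))
    to_drop = [column for column in upper_tri.columns if any(upper_tri[column] > threshold)]
    data.drop(columns=to_drop, inplace=True)
    return data

rng = np.random.default_rng(0)
base = rng.normal(size=100)
df = pd.DataFrame({
    "Close": base,
    "Close_lag1": base + rng.normal(scale=0.01, size=100),  # nearly identical to Close
    "Volume": rng.normal(size=100),                         # independent
})
df = remove_highly_correlated_features(df)
print(list(df.columns))  # → ['Close', 'Volume']
```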
<hr />
<p>So far so good! I started to push the data out to my Telegram. Bear in mind that the entirety of the code is generated each time, so the model works and produces results; changes come from me noticing issues with the output or errors that come up in the terminal.</p>
<p><strong>Output of signals to Telegram</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727485488747/2969fb40-ff16-4c46-a46f-1ee8577ec9e4.png" alt="trading signals for tickers sent to Telegram chat." class="image--center mx-auto" /></p>
<p>That’s it for this week, see you on the next one.</p>
<p>pxng0lin.</p>
]]></content:encoded></item><item><title><![CDATA[print(result) "Part 2 of NotADev"]]></title><description><![CDATA[Fetching Stock Data with yFinance
With the idea in place and my setup ready, it was time to start coding—or, more accurately, instructing AI to code for me.

Getting the Data
I needed historical stock data. The AI suggested using the yfinance library...]]></description><link>https://unchained.pxng0lin.xyz/print-result-part-2-of-notadev</link><guid isPermaLink="true">https://unchained.pxng0lin.xyz/print-result-part-2-of-notadev</guid><category><![CDATA[Developer]]></category><category><![CDATA[AI]]></category><category><![CDATA[Python]]></category><category><![CDATA[tradingbot]]></category><category><![CDATA[Trading]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Isa]]></dc:creator><pubDate>Thu, 10 Oct 2024 23:00:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1728260892433/4101ba74-2064-47d5-99af-70c0beb83255.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-fetching-stock-data-with-yfinance"><strong>Fetching Stock Data with yFinance</strong></h2>
<p>With the idea in place and my setup ready, it was time to start coding—or, more accurately, instructing AI to code for me.</p>
<hr />
<h3 id="heading-getting-the-data"><strong>Getting the Data</strong></h3>
<p>I needed historical stock data. The AI suggested using the <code>yfinance</code> library, which is a reliable source for stock market data.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> yfinance <span class="hljs-keyword">as</span> yf

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_stock_data</span>(<span class="hljs-params">ticker, interval=<span class="hljs-string">'1d'</span>, period=<span class="hljs-string">'5y'</span></span>):</span>
    stock = yf.Ticker(ticker)
    data = stock.history(interval=interval, period=period)
    <span class="hljs-keyword">return</span> data
</code></pre>
<p>It decided to fetch data for companies in the S&amp;P 500; I added a refinement to cover only the technology, energy, and utilities sectors. It used Wikipedia's list of constituents and extracted the tickers I was interested in.</p>
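<p>For reference, the ticker extraction boils down to filtering the Wikipedia table by sector. The sketch below runs on a hand-made sample table so it works offline; in the bot itself the table would come from <code>pandas.read_html</code> on the Wikipedia page, and the column names (<code>Symbol</code>, <code>GICS Sector</code>) are my assumption about that table's layout:</p>

```python
import pandas as pd

# In the bot the table would be fetched with, e.g.:
#   sp500 = pd.read_html("https://en.wikipedia.org/wiki/List_of_S%26P_500_companies")[0]
# Here we mimic the relevant columns with a small offline sample.
sp500 = pd.DataFrame({
    "Symbol": ["AAPL", "XOM", "JPM", "NEE", "AMD"],
    "GICS Sector": ["Information Technology", "Energy", "Financials",
                    "Utilities", "Information Technology"],
})

wanted = ["Information Technology", "Energy", "Utilities"]
tickers = sp500.loc[sp500["GICS Sector"].isin(wanted), "Symbol"].tolist()
print(tickers)  # → ['AAPL', 'XOM', 'NEE', 'AMD']
```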
<div data-node-type="callout">
<div data-node-type="callout-emoji">🛑</div>
<div data-node-type="callout-text"><strong>Challenge:</strong> Some tickers returned empty dataframes or had missing data.</div>
</div>

<p>After running the initial version, I realized that for some companies, especially smaller ones or those less actively traded, the data returned was sparse or even nonexistent. This would obviously create issues for the machine learning models down the line.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><strong>AI's Solution:</strong> Implement error handling and logging to skip tickers with insufficient data.</div>
</div>

<pre><code class="lang-python"><span class="hljs-keyword">import</span> logging

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_stock_data</span>(<span class="hljs-params">ticker, interval=<span class="hljs-string">'1d'</span>, period=<span class="hljs-string">'5y'</span></span>):</span>
    stock = yf.Ticker(ticker)
    <span class="hljs-keyword">try</span>:
        data = stock.history(interval=interval, period=period)
        <span class="hljs-keyword">if</span> data.empty:
            logging.warning(<span class="hljs-string">f"No data for <span class="hljs-subst">{ticker}</span>"</span>)
            <span class="hljs-keyword">return</span> <span class="hljs-literal">None</span>
        <span class="hljs-keyword">return</span> data
    <span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
        logging.error(<span class="hljs-string">f"Error fetching data for <span class="hljs-subst">{ticker}</span>: <span class="hljs-subst">{e}</span>"</span>)
        <span class="hljs-keyword">return</span> <span class="hljs-literal">None</span>
</code></pre>
<p>This modification allowed the script to log warnings or errors for tickers with issues and proceed with the rest, enhancing the robustness of the data fetching process.</p>
<hr />
<h3 id="heading-asynchronous-data-fetching"><strong>Asynchronous Data Fetching</strong></h3>
<p>Fetching data for multiple tickers sequentially was time-consuming. The AI assistant recommended using asynchronous programming with <code>asyncio</code> to speed up the process.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> asyncio

<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">fetch_all_data</span>(<span class="hljs-params">tickers, interval=<span class="hljs-string">'1d'</span>, period=<span class="hljs-string">'5y'</span></span>):</span>
    data = {}
    <span class="hljs-keyword">for</span> ticker <span class="hljs-keyword">in</span> tickers:
        stock_data = <span class="hljs-keyword">await</span> asyncio.to_thread(get_stock_data, ticker, interval, period)
        <span class="hljs-keyword">if</span> stock_data <span class="hljs-keyword">is</span> <span class="hljs-keyword">not</span> <span class="hljs-literal">None</span>:
            data[ticker] = stock_data
    <span class="hljs-keyword">return</span> data
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">🛑</div>
<div data-node-type="callout-text"><strong>Challenge:</strong> Initially, I encountered the error <code>TypeError: cannot unpack non-iterable coroutine object</code>. This error occurred because I wasn't handling the asynchronous functions properly.</div>
</div>

<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><strong>AI's Solution:</strong> The AI explained that I needed to ensure that any function that involves <code>await</code> is correctly defined as <code>async</code>, and that I should properly await coroutine objects.</div>
</div>

<p><strong>Corrected Code:</strong></p>
<pre><code class="lang-python"><span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">fetch_all_data</span>(<span class="hljs-params">tickers, interval=<span class="hljs-string">'1d'</span>, period=<span class="hljs-string">'5y'</span></span>):</span>
    tasks = [asyncio.to_thread(get_stock_data, ticker, interval, period) <span class="hljs-keyword">for</span> ticker <span class="hljs-keyword">in</span> tickers]
    results = <span class="hljs-keyword">await</span> asyncio.gather(*tasks)
    data = {ticker: result <span class="hljs-keyword">for</span> ticker, result <span class="hljs-keyword">in</span> zip(tickers, results) <span class="hljs-keyword">if</span> result <span class="hljs-keyword">is</span> <span class="hljs-keyword">not</span> <span class="hljs-literal">None</span>}
    <span class="hljs-keyword">return</span> data
</code></pre>
<p>This adjustment fixed the error, allowing for efficient, concurrent data fetching.</p>
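<p>The corrected pattern can be exercised standalone by swapping the yfinance call for a dummy blocking function; everything here except the <code>asyncio</code> pattern itself is made up for the demo (and <code>asyncio.to_thread</code> requires Python 3.9+):</p>

```python
import asyncio
import time

def get_stock_data(ticker, interval='1d', period='5y'):
    """Stand-in for the blocking yfinance call."""
    time.sleep(0.1)  # simulate network latency
    return f"data-for-{ticker}" if ticker != "BAD" else None

async def fetch_all_data(tickers, interval='1d', period='5y'):
    # Run each blocking fetch in a worker thread, concurrently
    tasks = [asyncio.to_thread(get_stock_data, t, interval, period) for t in tickers]
    results = await asyncio.gather(*tasks)
    # Skip tickers whose fetch returned nothing
    return {t: r for t, r in zip(tickers, results) if r is not None}

data = asyncio.run(fetch_all_data(["AAPL", "BAD", "AMD"]))
print(sorted(data))  # → ['AAPL', 'AMD']
```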
<hr />
<h3 id="heading-scheduling-with-crontab"><strong>Scheduling with crontab</strong></h3>
<p>To automate the bot's execution, I used <code>crontab</code> on my Linode instance to schedule it to run daily. This way, the bot would fetch new data and perform analysis every day without manual intervention.</p>
<pre><code class="lang-bash">crontab -e
<span class="hljs-comment"># Add the following line to run the bot every day at 00:10</span>
10 0 * * * /usr/bin/python3 /path/to/the/bot.py
</code></pre>
<p>Sample log output from a scheduled run:</p>
<pre><code class="lang-bash">INFO:__main__:Telegram message sent.
INFO:__main__:Analyzing 91 tickers.
INFO:__main__:Fetched data <span class="hljs-keyword">for</span> AKAM from yfinance, Data Shape: (262, 7)
INFO:__main__:Fetched data <span class="hljs-keyword">for</span> ADBE from yfinance, Data Shape: (262, 7)
INFO:__main__:Fetched data <span class="hljs-keyword">for</span> ACN from yfinance, Data Shape: (262, 7)
INFO:__main__:Fetched data <span class="hljs-keyword">for</span> AMD from yfinance, Data Shape: (262, 7)
INFO:__main__:Fetched data <span class="hljs-keyword">for</span> APH from yfinance, Data Shape: (262, 7)
INFO:__main__:Combined data <span class="hljs-keyword">for</span> AKAM, Data Shape: (262, 139)
INFO:__main__:Combined data <span class="hljs-keyword">for</span> ACN, Data Shape: (262, 142)
INFO:__main__:Combined data <span class="hljs-keyword">for</span> APH, Data Shape: (262, 144)
INFO:__main__:Combined data <span class="hljs-keyword">for</span> AMD, Data Shape: (262, 143)
INFO:__main__:Combined data <span class="hljs-keyword">for</span> ADBE, Data Shape: (262, 143)
</code></pre>
<hr />
<h3 id="heading-preference">Preference</h3>
<p>I run a few things on a schedule, but for this bot in particular (as with my own bot for a similar concept) I use a <code>.sh</code> file to run the script, so that I can activate the Python virtual environment before running it.</p>
<p>There may be better ways of doing it, but, as I mentioned, I'm not a developer; I just dabble. So I usually fall back to shell scripts, the kind of thing I used to write while doing web2 ethical hacking.</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>
<span class="hljs-comment"># change directory</span>
<span class="hljs-built_in">cd</span> /home/REDACTED/REDACTED/testing
<span class="hljs-comment"># activate the virtual environment</span>
<span class="hljs-built_in">source</span> /home/REDACTED/REDACTED/testing/.TEST/bin/activate
<span class="hljs-comment"># run the python script</span>
python3 /home/REDACTED/REDACTED/REDACTED/bot_24_draft.py
<span class="hljs-comment"># Deactivate the virtual environment</span>
deactivate
</code></pre>
<hr />
<p>We will stop here, and I’ll see you on the next one.</p>
<p>pxng0lin.</p>
]]></content:encoded></item><item><title><![CDATA[Article = [i for i in NotADev]]]></title><description><![CDATA[Is this what a dev does?
In this series of articles, I'm going to take you through a little journey that I've decided to label as 'NotADev'.
Why, you may ask? Well, I've dabbled in Python scripts for some years whilst doing web2 ethical hacking and r...]]></description><link>https://unchained.pxng0lin.xyz/i-am-notadev</link><guid isPermaLink="true">https://unchained.pxng0lin.xyz/i-am-notadev</guid><category><![CDATA[Developer]]></category><category><![CDATA[Python]]></category><category><![CDATA[AI]]></category><category><![CDATA[tradingbot]]></category><category><![CDATA[stockmarket]]></category><category><![CDATA[trading, ]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Isa]]></dc:creator><pubDate>Fri, 04 Oct 2024 09:00:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1727480461686/cbfa04ea-b838-40d3-8bc7-9e25a546a092.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-is-this-what-a-dev-does">Is this what a dev does?</h2>
<p>In this series of articles, I'm going to take you through a little journey that I've decided to label as 'NotADev'.</p>
<p>Why, you may ask? Well, I've dabbled in Python scripts for some years whilst doing web2 ethical hacking and recently transitioning into web3 security research. Many moons ago, I was an analyst, building propensity models, forecasting, and performing various methods of customer analysis. So, I still have an interest in data, algorithms, and statistics.</p>
<p>Lately, I've tried my hand at trading, both in crypto and stocks &amp; shares—only casually—but I have my hand in where I can on the stock market.</p>
<p>Over the past few months, I've caught the bug to do some dev work, starting off by building things myself and utilising AI for new ideas or code improvements. Then Replit came out with Replit Agent, and I jumped at the opportunity to build some apps based on ideas I'd had, and I was truly amazed! Fast forward to today: I'm still using it, but I came across the new 'ChatGPT o1 preview' model and thought I'd test it to improve my already AI-written analysis bot for the stocks and shares market.</p>
<hr />
<h2 id="heading-the-idea">The idea.</h2>
<p>I wanted to come up with a way of trading on the stock market in an automated fashion—saving me time analysing, monitoring prices, and buying or selling, all the usual stuff. So, I aimed to build a bot that would do this and then relay the results back to me via message, hoping to get the bot to trade for me too eventually.</p>
<p>Using free resources for data, ChatGPT (paid) for code generation, the Telegram API for sending results, and some personal preferences, I began the quest to build a bot to run in the cloud.</p>
<h3 id="heading-requirements">Requirements.</h3>
<ul>
<li><p>Automated extraction of historical stock market data</p>
<ul>
<li><p>Filtered for the technology and energy industries only</p>
</li>
<li><p>Auto selection of most favourable tickers</p>
</li>
</ul>
</li>
<li><p>Incorporate trading indicators for signals</p>
</li>
<li><p>Utilise Machine Learning to predict future prices for weekly and daily intervals.</p>
<ul>
<li><p>Aim for accuracy over 80% (personal preference).</p>
</li>
<li><p>Weekly analysis being the focus, with daily used as a run rate to capture volatility and adjust the weekly if necessary.</p>
</li>
</ul>
</li>
<li><p>Build a back-testing model to test the algorithm/models historically</p>
</li>
<li><p>Save a summary of the results by date for each ticker</p>
</li>
<li><p>Message the signals on a daily basis</p>
</li>
<li><p>NTH (Nice To Have):</p>
<ul>
<li>Bot trading from signals on stock &amp; shares market</li>
</ul>
</li>
</ul>
<hr />
<h2 id="heading-my-setup">My Setup</h2>
<p>A primitive setup I’m sure, but since I’m not an actual developer, it’s suitable for the circumstances.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>What</td><td>Links</td><td>Comments</td></tr>
</thead>
<tbody>
<tr>
<td>VS Code IDE</td><td><a target="_blank" href="https://code.visualstudio.com/">Visual Studio Code</a></td><td>Used for all my “Dev” and web3 security research.</td></tr>
<tr>
<td>ChatGPT<br />- o1-preview<br />- o1-mini<br />- 4o</td><td><a target="_blank" href="https://chatgpt.com/">https://chatgpt.com/</a></td><td>This is the AI I'll be using to generate the code, build the models, and the algorithm.</td></tr>
<tr>
<td>Python 3</td><td><a target="_blank" href="https://www.python.org/">Python</a></td><td>Language used for coding. I'm not advanced in this, and I'll lean on the AI for 95% of the code as I'm avoiding building it manually on purpose—also, it's a lot quicker than me.</td></tr>
<tr>
<td>Python Virtual Environment</td><td><a target="_blank" href="https://docs.python.org/3/library/venv.html">Virtual Environments</a><br /><a target="_blank" href="https://www.freecodecamp.org/news/how-to-setup-virtual-environments-in-python/">How to Install</a></td><td>I always use a virtual environment for my Python coding. I encourage you to do the same—much cleaner, fewer bugs and conflicts.</td></tr>
<tr>
<td>Linode</td><td><a target="_blank" href="https://www.linode.com/lp/refer/?r=3d40af30ff2ebfe91e3c65152a0549da4d774a38">https://www.linode.com</a></td><td>I use this service for cloud instances at an inexpensive cost. This is my referral link; you'll receive a $100 60-day credit once you've added a valid payment method to your account.</td></tr>
<tr>
<td>Wikipedia</td><td><a target="_blank" href="https://en.wikipedia.org/wiki/List_of_S%26P_500_companies">List of S&amp;P 500 companies - Wikipedia</a></td><td>List of the S&amp;P 500 companies. I chose specific industries that I would trade in.</td></tr>
<tr>
<td>WSL2</td><td><a target="_blank" href="https://learn.microsoft.com/en-us/windows/wsl/install">Install WSL2</a></td><td>This is my go-to for my setup. I have a Windows laptop, but I use Linux 80% of the time. Linking this to my VS Code IDE is a must—can't work any other way.</td></tr>
</tbody>
</table>
</div>
<hr />
<p>So, that makes a start, see you in the next one.</p>
<p>pxng0lin.</p>
]]></content:encoded></item><item><title><![CDATA[(1) RAD::Foundry setup and contracts]]></title><description><![CDATA[Forging the base
After setting up Radicle, we need to setup Foundry and start to build our contracts. 'Foundry 101' has a Github repo that we'll utilise for our code to form the base, then, following along make changes where needed. Radicle will repl...]]></description><link>https://unchained.pxng0lin.xyz/1-radfoundry-setup-and-contracts</link><guid isPermaLink="true">https://unchained.pxng0lin.xyz/1-radfoundry-setup-and-contracts</guid><category><![CDATA[radicle]]></category><category><![CDATA[foundry]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[Web3]]></category><category><![CDATA[Smart Contracts]]></category><dc:creator><![CDATA[Isa]]></dc:creator><pubDate>Fri, 14 Jun 2024 05:00:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1717689824107/e70a2139-d14b-446f-8aae-d9a164da03b1.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-forging-the-base">Forging the base</h2>
<p>After setting up Radicle, we need to set up Foundry and start building our contracts. 'Foundry 101' has a GitHub repo that we'll use as the base for our code, following along and making changes where needed. Radicle will replace the adds, commits, and pushes usually done to GitHub; let's hope we don't run into too many issues along the way.</p>
<pre><code class="lang-bash">forge init
</code></pre>
<p><strong>Oops.. What's this?!</strong></p>
<p>As we've created a <code>README.md</code> document in the working directory (previous article), we need to use the <code>--force</code> flag when initialising Foundry.</p>
<pre><code class="lang-bash">forge init --force
</code></pre>
<p>Voilà!</p>
<pre><code class="lang-bash">Target directory is not empty, but `--force` was specified
Initializing /home/pxng0lin/web3/projects/radicle_xyz/rad-foundry-fund-me...
Installing forge-std <span class="hljs-keyword">in</span> /home/pxng0lin/web3/projects/radicle_xyz/rad-foundry-fund-me/lib/forge-std (url: Some(<span class="hljs-string">"https://github.com/foundry-rs/forge-std"</span>), tag: None)
Cloning into <span class="hljs-string">'/home/pxng0lin/web3/projects/radicle_xyz/rad-foundry-fund-me/lib/forge-std'</span>...
remote: Enumerating objects: 2310, <span class="hljs-keyword">done</span>.
remote: Counting objects: 100% (2305/2305), <span class="hljs-keyword">done</span>.
remote: Compressing objects: 100% (805/805), <span class="hljs-keyword">done</span>.
remote: Total 2310 (delta 1534), reused 2145 (delta 1428), pack-reused 5
Receiving objects: 100% (2310/2310), 658.95 KiB | 7.66 MiB/s, <span class="hljs-keyword">done</span>.
Resolving deltas: 100% (1534/1534), <span class="hljs-keyword">done</span>.
    Installed forge-std v1.8.2
    Initialized forge project
</code></pre>
<p>Nothing is really different at this stage. I did notice that a <code>.github/workflows</code> directory was created containing a <code>test.yml</code> file; we'll see if that has any significance later.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716660522462/8130dc3e-8e56-44d7-9c0e-956ece99c996.png" alt="image of directories and files in vs code editor after init" class="image--center mx-auto" /></p>
<p>Now we make our first copy-paste, using the <a target="_blank" href="https://github.com/Cyfrin/remix-fund-me-f23/blob/main/FundMe.sol">GitHub repo</a> from the course resources.</p>
<ul>
<li><p>Creating a new file named <code>FundMe.sol</code> in the <code>src</code> directory</p>
</li>
<li><p>Then we navigate to the <code>remix-fund-me-f23/FundMe.sol</code> contract in the repo and paste this into our new <code>FundMe.sol</code> file.</p>
</li>
<li><p>Repeat the steps for the second contract <code>PriceConverter.sol</code></p>
</li>
</ul>
<hr />
<h2 id="heading-radicle-push">Radicle push</h2>
<p>Now, since we are using Radicle, I want to test that my changes are working.</p>
<p>Following the Radicle user guide, we will commit and push the changes to my Radicle repo.</p>
<blockquote>
<p>Once you’re finished, add and commit your changes with <code>git add</code> and <code>git commit</code> just as you would when collaborating on any other Git repository. Then use <code>git push rad master</code> to synchronize the changes with your node (be sure to replace <code>master</code> with your default branch, in case that’s not it).</p>
</blockquote>
<p>So for my rad push, I'll be pushing to <code>main</code>, as that's the branch name I set upon initialisation.</p>
<pre><code class="lang-bash">git push rad main
</code></pre>
<p><strong>Error!</strong></p>
<pre><code class="lang-bash">error: error connecting to ssh-agent: Environment variable `SSH_AUTH_SOCK` not found
error: failed to push some refs to <span class="hljs-string">'rad://z43pr3L72n8wT74KSHcty9fEY5JaL/z6Mkg3Tu7aGDn3pLrshRiaCFLQJwEHyFKTCwYFoKDHts1YV2'</span>
</code></pre>
<p>Our first error! It seems that not having <code>ssh-agent</code> running is causing issues. I ran the following command, which starts <code>ssh-agent</code> and exports its environment variables (<code>SSH_AUTH_SOCK</code> and <code>SSH_AGENT_PID</code>) into the current shell.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">eval</span> <span class="hljs-string">"<span class="hljs-subst">$(ssh-agent -s)</span>"</span>
</code></pre>
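<p>Since the missing agent bit us once, a small guard like this can confirm whether an agent is reachable before pushing. This is a sketch of my own (not a Radicle command); it only relies on <code>SSH_AUTH_SOCK</code>, the socket path a standard OpenSSH agent exports.</p>

```shell
# Hypothetical helper (mine, not part of Radicle): check whether an
# ssh-agent is already reachable before starting a new one.
# SSH_AUTH_SOCK is the socket path an OpenSSH agent exports into the
# environment; `eval "$(ssh-agent -s)"` is what sets it for a new agent.
agent_running() {
  if [ -n "${SSH_AUTH_SOCK:-}" ] && [ -S "$SSH_AUTH_SOCK" ]; then
    echo "agent running"
  else
    echo "no agent detected"
  fi
}
agent_running
```

If it prints "no agent detected", run <code>eval "$(ssh-agent -s)"</code> as above before pushing.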
<p>After this, we run the <code>rad auth</code> command to ensure our private key is added to the agent.</p>
<pre><code class="lang-bash">rad auth
✓ Passphrase: ********
✓ Radicle key added to ssh-agent
</code></pre>
<p>So, let's try again to push to Radicle... Success!</p>
<pre><code class="lang-bash">git push rad main
✓ Canonical head updated to 9d054f3d641253a3babd505e0b05807e4065e670
✓ Synced with 4 node(s)

  https://app.radicle.xyz/nodes/seed.radicle.garden/rad:z43pr3L72n8wT74KSHcty9fEY5JaL/tree/9d054f3d641253a3babd505e0b05807e4065e670

To rad://z43pr3L72n8wT74KSHcty9fEY5JaL/z6Mkg3Tu7aGDn3pLrshRiaCFLQJwEHyFKTCwYFoKDHts1YV2
   9ff86ae..9d054f3  main -&gt; main
</code></pre>
<hr />
<h2 id="heading-imports-and-dependencies">Imports and Dependencies</h2>
<p>Upon running <code>forge build</code> to compile our new contracts, we get some errors. The course explains why and gives the solution, so we will implement this now.</p>
<p>Firstly, we need the repo that we're going to install from, <a target="_blank" href="https://github.com/smartcontractkit/chainlink-brownie-contracts">Chainlink</a>. In the terminal we use the command <code>forge install smartcontractkit/chainlink-brownie-contracts --no-commit</code> (notice that we don't use the full URL).</p>
<p>Now we need to wire the dependency up so our contracts can use it; this is done with <code>remappings</code> in the <code>foundry.toml</code> file. The remapping rewrites <code>@chainlink/contracts</code> to <code>lib/chainlink/chainlink-brownie-contracts/contracts</code> when contracts are imported into our project. It's a shortcut that lets you use a shorter name to refer to a specific location, making imports easier.</p>
<pre><code class="lang-toml">[profile.default]
src = "src"
out = "out"
libs = ["lib"]
remappings = ["@chainlink/contracts/=lib/chainlink/chainlink-brownie-contracts/contracts"]

# See more config options https://github.com/foundry-rs/foundry/blob/master/crates/config/README.md#all-options
</code></pre>
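<p>To make the rewrite concrete, here's a small shell illustration of what the compiler effectively does with an import path. The prefix and target mirror the remappings entry above; the specific Chainlink interface import is my own example.</p>

```shell
# Illustration only: how a remapping rewrites an import path.
# prefix/target mirror the remappings entry in foundry.toml;
# the import path is a hypothetical Chainlink interface import.
prefix="@chainlink/contracts/"
target="lib/chainlink/chainlink-brownie-contracts/contracts/"
import_path="@chainlink/contracts/src/v0.8/interfaces/AggregatorV3Interface.sol"
resolved="${target}${import_path#"$prefix"}"   # strip prefix, prepend target
echo "$resolved"
# → lib/chainlink/chainlink-brownie-contracts/contracts/src/v0.8/interfaces/AggregatorV3Interface.sol
```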
<p>Save the <code>toml</code> file, go to the terminal, and run <code>forge build</code>. Success!</p>
<pre><code class="lang-bash">[⠊] Compiling...
[⠢] Compiling 3 files with Solc 0.8.25
[⠆] Solc 0.8.25 finished <span class="hljs-keyword">in</span> 68.45ms
Compiler run successful!
</code></pre>
<p>Lastly, a little alpha from Patrick to make identifying errors in our contracts easier: adjusting the <code>NotOwner()</code> custom error by prefixing the contract name.</p>
<pre><code class="lang-solidity"><span class="hljs-function"><span class="hljs-keyword">error</span> <span class="hljs-title">FundMe__NotOwner</span>(<span class="hljs-params"></span>)</span>;
</code></pre>
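<p>Why the prefix helps: when several contracts each define a bare <code>NotOwner()</code>, a revert doesn't tell you which one fired, whereas the prefixed name is unambiguous and greppable. A throwaway shell sketch (the file names are hypothetical, just for illustration):</p>

```shell
# Two contracts, each with an owner-check error; the prefix disambiguates.
mkdir -p demo/src
printf 'error FundMe__NotOwner();\n' > demo/src/FundMe.sol
printf 'error Vault__NotOwner();\n'  > demo/src/Vault.sol
# A plain "NotOwner" would match both files; the prefixed name pins it down.
grep -rl "FundMe__NotOwner" demo/src
# → demo/src/FundMe.sol
```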
<p>Ok, let's wrap up here; so far so good. I don't want to reinvent the wheel for the course; rather, this is more about ensuring my Radicle repo is capturing my changes, and we've achieved that so far.</p>
]]></content:encoded></item><item><title><![CDATA[(0) I Need Fuel.]]></title><description><![CDATA[I decided to join the newly introduced 'Attackathon' on the Immunefi platform; it's a first for them, and for me. Not only is there a big prize pool to be won, but it's also introducing a new programming language, Sway, which I'm sure will be challen...]]></description><link>https://unchained.pxng0lin.xyz/0-i-need-fuel</link><guid isPermaLink="true">https://unchained.pxng0lin.xyz/0-i-need-fuel</guid><category><![CDATA[sway]]></category><category><![CDATA[Fuel]]></category><category><![CDATA[Web3]]></category><category><![CDATA[Smart Contracts]]></category><category><![CDATA[Programming Blogs]]></category><dc:creator><![CDATA[Isa]]></dc:creator><pubDate>Fri, 07 Jun 2024 05:00:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1717504393183/b25c8e81-57e1-45b7-9617-f2d20fd1712a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I decided to join the newly introduced 'Attackathon' on the <a target="_blank" href="https://immunefi.com/boost/fuel-network-attackathon/">Immunefi</a> platform; it's a first for them, and for me. Not only is there a big prize pool to be won, but it's also introducing a new programming language, Sway, which I'm sure will be challenging and fun to learn... Here's hoping.</p>
<blockquote>
<h2 id="heading-whats-an-attackathon">What's an Attackathon?</h2>
<p>Attackathons are education-based bughunting contests where whitehat hackers compete over a reward pool by submitting impactful bugs in the project's code.</p>
<p><strong>Before the Attackathon</strong><br />Immunefi works with the project to host a security-focused education period, providing top tier education and support to security researchers.</p>
<p><strong>During the Attackathon</strong><br />Whitehats experience ideal bughunting conditions, with direct project support, responsiveness, and duplicate rewards.</p>
<p><strong>After the Attackathon</strong><br />Immunefi spotlights the security accomplishments, with a custom leaderboard, Attackathon findings report, bug fix reviews, and NFT awards.</p>
<p>Ultimately, Attackathons serve to secure projects, develop their security ecosystem, and create new opportunities for whitehats.</p>
<p>source: <a target="_blank" href="https://immunefi.com/academy/fuel-network-attackathon/">Fuel Attackathon | Immunefi</a></p>
</blockquote>
<p>Before going deep into Fuel, I thought I'd have a look and feel for the Sway programming language. Using the <a target="_blank" href="https://fuellabs.github.io/sway/v0.60.0/book/index.html">documentation</a>, I'll get set up and follow the <a target="_blank" href="https://docs.fuel.network/docs/intro/quickstart/">Developer Quickstart Guide</a> for one of the development tutorials.</p>
<hr />
<h2 id="heading-smart-contract-quickstart-fuel-injection-installation">Smart Contract Quickstart: Fuel <s>Injection</s> Installation</h2>
<p><em>NB: I'm using VS Code as my editor, and I will continue my journey with Radicle and use that as my repo instead of Github. Also, I installed the Sway extension as recommended by Fuel.</em></p>
<p>First, we create a working directory, change into it, and initialise a git repo.</p>
<pre><code class="lang-bash">mkdir counter-tutorial
cd counter-tutorial
git init
</code></pre>
<p>Then we install Fuel, following the <a target="_blank" href="https://docs.fuel.network/guides/contract-quickstart/">instructions</a>.</p>
<pre><code class="lang-bash">curl https://install.fuel.network | sh
</code></pre>
<p>Straightforward, and I didn't run into any issues. I then ran the update steps just in case.</p>
<pre><code class="lang-bash">fuelup update
updating the <span class="hljs-string">'latest-x86_64-unknown-linux-gnu'</span> toolchain
[00:00:00] [<span class="hljs-comment">########################################] 4.84 KiB/4.84 KiB (0s) - Download complete</span>
Downloading: forc forc-explore forc-wallet fuel-core fuel-core-keygen

latest updated
  updated components:
  - forc 0.60.0
  - forc-explore 0.28.1
  - forc-wallet 0.7.1
  - fuel-core 0.26.0
  - fuel-core-keygen 0.26.0
</code></pre>
<hr />
<h2 id="heading-the-counter-contract">The counter contract</h2>
<p>The following command will generate our Sway contract; I guess this is similar to other tutorials that provide pre-built examples to work from.</p>
<pre><code class="lang-bash">forc template --template-name counter counter-contract
</code></pre>
<p>After this I can initialise my Radicle repo too.</p>
<pre><code class="lang-bash">rad node start
rad init
</code></pre>
<p><strong>Output:</strong></p>
<ul>
<li><a target="_blank" href="https://app.radicle.xyz/nodes/seed.radicle.garden/rad:z33zCPgNNzPxBBuJMRMjhgGbZHzwt">The Radicle repo</a></li>
</ul>
<pre><code class="lang-bash">rad init

Initializing radicle 👾 repository <span class="hljs-keyword">in</span> /home/pxng0lin/web3/immunefi/fuel/fuel-tutorials..

✓ Name Fuel tutorials
✓ Description Fuel tutorials <span class="hljs-keyword">for</span> development with Sway
✓ Default branch main
✓ Visibility public
✓ Passphrase: [REDACTED*]
✓ Unsealing key...
✓ Repository fuel-tutorials created.

...
</code></pre>
<pre><code class="lang-bash">counter-contract
├── Forc.toml
└── src
    └── main.sw

1 directory, 2 files
</code></pre>
<h3 id="heading-building-compiling-the-contract">Building (compiling) the contract.</h3>
<p>Following the instructions, we can now build the contract:</p>
<pre><code class="lang-bash">forc build
  Creating a new `Forc.lock` file. (Cause: lock file did not match manifest)
  Removing core
  Removing counter
  Removing std git+https://github.com/fuellabs/sway?tag=v0.31.1<span class="hljs-comment">#c32b0759d25c0b515cbf535f9fb9b8e6fda38ff2</span>
    Adding core
    Adding std git+https://github.com/fuellabs/sway?tag=v0.60.0<span class="hljs-comment">#2f0392ee35a1e4dd80bd8034962d5b4083dfb8b6</span>
   Created new lock file at /home/pxng0lin/web3/immunefi/fuel/fuel-tutorials/counter-contract/Forc.lock
  Finished debug [unoptimized + fuel] target(s) <span class="hljs-keyword">in</span> 3.56s
</code></pre>
<p>Next we will set up a local wallet; this comes alongside Fuel, so we shouldn't have any issues here either.</p>
<pre><code class="lang-bash">forc wallet new
</code></pre>
<p>Since this is a tutorial, I won't make the password complex; I'm more than likely to forget it if I do.</p>
<pre><code class="lang-bash">Wallet mnemonic phrase: [REDACTED]
</code></pre>
<p>We will create a new wallet account to get our Fuel address using the command <code>forc wallet account new</code>.</p>
<pre><code class="lang-bash">Please enter your wallet password to derive account 1: 
Wallet address: [REDACTED]
</code></pre>
<h3 id="heading-deploying-our-contract">Deploying our contract</h3>
<p>We will now deploy to the testnet by running the command <code>forc deploy --testnet</code>. Once we've provided the password for our wallet we should receive confirmation of deployment.</p>
<p><strong>Error!</strong></p>
<p>What's this? We're unable to deploy since we have 0 funds in our wallet. As the tutorial notes, we can get some funds from the <a target="_blank" href="https://faucet-testnet.fuel.network/">faucet</a>.</p>
<p>After fuelling up, we will try that again.</p>
<pre><code class="lang-bash"> forc deploy --testnet
  Finished release [optimized + fuel] target(s) <span class="hljs-keyword">in</span> 3.28s

Please provide the password of your encrypted wallet vault at <span class="hljs-string">"[REDACTED]"</span>: 

---------------------------------------------------------------------------
Account 1: [REDACTED]

Asset ID : f8f8b6283d7fa5b672b530cbb84fcccb4ff8dc40f8176ef4544ddb1f1952ad07
Amount   : 2000049
---------------------------------------------------------------------------

Please provide the index of account to use <span class="hljs-keyword">for</span> signing: 1
Do you agree to sign this transaction with [REDACTED]? [y/N]: y


Contract counter-contract Deployed!

Network: https://testnet.fuel.network
Contract ID: 0x71a2a7efd28e8b80388105cb3bb52ca5e100b21cb034ee6e44a5ffb135f7f361
Deployed <span class="hljs-keyword">in</span> block 0018e86c
</code></pre>
<p><strong>Success!</strong></p>
<p>That finishes the tutorial, so I'll wrap it up here too. Their suggested next steps are further tutorials.</p>
<p>I'll have a read through the <a target="_blank" href="https://docs.fuel.network/docs/sway/">Sway documentation</a> to get a better feel for what I'm looking at in the contract, and potentially move on to <a target="_blank" href="https://docs.fuel.network/docs/sway/sway-program-types/">Sway Program Types</a> and build a Predicate following the tutorial. Until the next article, thanks for reading.</p>
]]></content:encoded></item><item><title><![CDATA[(0) RAD::Getting started with Radicle]]></title><description><![CDATA[ResourcesLink/ReferenceSource



Introduction to RadicleHow to Replace GitHub with Radicle to Take Ownership of Your Code (youtube.com)Nader Dabit, Youtube

Radicle Guide, setting upRadicle User GuideThe Radicle Team, Radicle website

Foundry, Fund M...]]></description><link>https://unchained.pxng0lin.xyz/radicle-auditooor-setting-up-radicle-for-web3-learning</link><guid isPermaLink="true">https://unchained.pxng0lin.xyz/radicle-auditooor-setting-up-radicle-for-web3-learning</guid><category><![CDATA[GitHub]]></category><category><![CDATA[radicle]]></category><category><![CDATA[foundry]]></category><category><![CDATA[WSL]]></category><category><![CDATA[Visual Studio Code]]></category><category><![CDATA[Web3]]></category><dc:creator><![CDATA[Isa]]></dc:creator><pubDate>Fri, 31 May 2024 05:00:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1716517991628/b186e19f-5a37-4065-9ad5-5385996d7531.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="hn-table">
<table>
<thead>
<tr>
<td>Resources</td><td>Link/Reference</td><td>Source</td></tr>
</thead>
<tbody>
<tr>
<td>Introduction to Radicle</td><td><a target="_blank" href="https://www.youtube.com/watch?v=Y8pulFGOrMw">How to Replace GitHub with Radicle to Take Ownership of Your Code (youtube.com)</a></td><td>Nader Dabit, Youtube</td></tr>
<tr>
<td>Radicle Guide, setting up</td><td><a target="_blank" href="https://radicle.xyz/guides/user#1-getting-started">Radicle User Guide</a></td><td>The Radicle Team, Radicle website</td></tr>
<tr>
<td>Foundry, Fund Me</td><td><a target="_blank" href="https://updraft.cyfrin.io/courses/foundry">https://updraft.cyfrin.io/courses/foundry</a></td><td>Updraft Foundry 101</td></tr>
<tr>
<td>Visual Studio Code</td><td><a target="_blank" href="https://code.visualstudio.com/">Visual Studio Code - Code Editing. Redefined</a></td><td>Visual Studio Website</td></tr>
<tr>
<td>WSL2</td><td><a target="_blank" href="https://learn.microsoft.com/en-us/windows/wsl/install">Install WSL | Microsoft Learn</a></td><td>Microsoft Learn</td></tr>
<tr>
<td>VS Code WSL plugin</td><td><a target="_blank" href="https://code.visualstudio.com/learn/develop-cloud/wsl">Developing in the Windows Subsystem for Linux with Visual Studio Code</a></td><td>Visual Studio Marketplace</td></tr>
<tr>
<td>Radicle plugin</td><td><a target="_blank" href="https://marketplace.visualstudio.com/items?itemName=radicle-ide-plugins-team.radicle">Radicle - Visual Studio Marketplace</a></td><td>Visual Studio Marketplace</td></tr>
</tbody>
</table>
</div><hr />
<h1 id="heading-who-am-i">Who Am I?</h1>
<div data-node-type="callout">
<div data-node-type="callout-emoji">👀</div>
<div data-node-type="callout-text"><a target="_blank" href="https://pxng0lin.hashnode.dev/pxng0linlaptop-echo-aboutme">Check my '$AboutMe' article</a></div>
</div>

<hr />
<h1 id="heading-purpose-of-activity">Purpose of activity?</h1>
<p>I've used GitHub for quite some time (I'm no expert, of course). Since I got into web2 hacking many moons ago, and then web3 in the past few years (as of May 2024), I've been using it mainly for security research, a few Python scripts, and early web3 learning (at the moment with Cyfrin's Updraft learning platform; go check it out <a target="_blank" href="https://updraft.cyfrin.io">here</a>).</p>
<p>More recently, hoping to build more of a portfolio whilst learning, I stumbled across the video by Nader. Decentralisation is a word thrown around frequently, but it is relevant and has meaning, so I thought I would steer my own projects towards it and see how it goes.</p>
<hr />
<h2 id="heading-getting-setup">Getting setup</h2>
<p>I already have WSL2 setup on my Windows laptop, and the VS Code editor installed - including the plugin to connect to WSL and the Radicle plugin. So, I'll only need to follow the setup of the Radicle repo from here.</p>
<p><strong>Installation as per the instructions from the guide:</strong></p>
<pre><code class="lang-bash">curl -sSf https://radicle.xyz/install | sh
</code></pre>
<p><strong>Output:</strong></p>
<pre><code class="lang-bash"> Welcome to Radicle

Detecting operating system...
Downloading https://files.radicle.xyz/releases/latest/radicle-x86_64-unknown-linux-musl.tar.xz...
<span class="hljs-comment">######################################################################## 100.0%</span>
Downloading https://files.radicle.xyz/releases/latest/radicle-x86_64-unknown-linux-musl.tar.xz.sig...
<span class="hljs-comment">######################################################################## 100.0%</span>
Verifying radicle-x86_64-unknown-linux-musl.tar.xz...
Good <span class="hljs-string">"file"</span> signature <span class="hljs-keyword">for</span> cloudhead with ED25519 key SHA256:iTDjRHSIaoL8dpHbQ0mv+y0IQqPufGl2hQwk4TbXFlw
Installing Radicle into /home/pxng0lin/.radicle...
Configuring path variable <span class="hljs-keyword">in</span> ~/.bashrc...

✓ Radicle 1.0.0-rc.9 was installed successfully.

Before running Radicle <span class="hljs-keyword">for</span> the first time,
run `<span class="hljs-built_in">source</span> ~/.bashrc` or open a new terminal.

Then, create your Radicle key pair with `rad auth`.
</code></pre>
<p>As per the instructions, we need to execute the commands in my <code>.bashrc</code> file by calling <code>source</code>. This re-reads the file, which is otherwise only read when Bash starts up. Alternatively, we could open a new terminal window.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">source</span> ~/.bashrc
</code></pre>
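<p>If the <code>rad</code> command still isn't found after sourcing, it's worth confirming that the installer's bin directory is actually on <code>PATH</code>. A small sketch of my own; the <code>~/.radicle/bin</code> location is an assumption based on the installer output above.</p>

```shell
# Check whether a given directory appears on PATH. The radicle bin dir
# is assumed to live under ~/.radicle (per the installer output above).
on_path() {
  case ":$PATH:" in
    *":$1:"*) echo "on PATH" ;;
    *)        echo "not on PATH" ;;
  esac
}
on_path "$HOME/.radicle/bin"
```

If it reports "not on PATH", re-check that the line the installer added to <code>~/.bashrc</code> was actually sourced.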
<p>Now let's check that it was successful by running the version command.</p>
<pre><code class="lang-bash">rad --version
</code></pre>
<p><strong>Output:</strong></p>
<pre><code class="lang-bash">rad 1.0.0-rc.9 (d56d619f)
</code></pre>
<p><strong>Great! I'm good so far.</strong></p>
<p>Next is creating a Radicle identity, aka a Radicle DID (Decentralised Identifier). This is a <a target="_blank" href="https://en.wikipedia.org/wiki/Public-key_cryptography">cryptographic key pair</a> used to identify and authenticate me/my node on the network. The public key is visible to everyone on the network; the private key, as the name indicates, is used only to authenticate my node and to sign code and other artifacts, and shouldn't be shared publicly.</p>
<pre><code class="lang-bash">rad auth
</code></pre>
<p><strong>Output:</strong></p>
<pre><code class="lang-bash">
Initializing your radicle 👾 identity

✓ Enter your <span class="hljs-built_in">alias</span>: rxdicle-1
✓ Enter a passphrase: [REDACTED *]
✓ Creating your Ed25519 keypair...
✓ Your Radicle DID is did:key:[REDACTED]. This identifies your device. Run `rad self` to show it at all <span class="hljs-built_in">times</span>.
✓ You<span class="hljs-string">'re all set.

✗ Hint: install ssh-agent to have it fill in your passphrase for you when signing.

To create a Radicle repository, run `rad init` from a Git repository with at least one commit.
To clone a repository, run `rad clone &lt;rid&gt;`. For example, `rad clone rad:z3gqcJUoA1n9HaHKufZs5FCSGazv5` clones the Radicle '</span>heartwood<span class="hljs-string">' repository.
To get a list of all commands, run `rad`.</span>
</code></pre>
<p><em>NB: I didn't use ssh-agent, we will revisit this later.</em></p>
<blockquote>
<p>Your Radicle DID is similar to your Node ID (NID); the difference is that the former is formatted as a <a target="_blank" href="https://en.wikipedia.org/wiki/Decentralized_identifier">Decentralized Identifier</a>, while the latter is just the encoded public key. Share your Radicle DID freely with collaborators.</p>
</blockquote>
<p>You can check your rad details by running a few commands (reference below); I used <code>rad self</code> to check mine.</p>
<pre><code class="lang-bash">Alias           rxdicle-1
DID             did:key:z6Mkg3Tu7aGDn3pLrshRiaCFLQJwEHyFKTCwYFoKDHts1YV2
└╴Node ID (NID) [REDACTED]
SSH             not running
├╴Key (<span class="hljs-built_in">hash</span>)    [REDACTED]
└╴Key (full)    [REDACTED]
Home            /home/pxng0lin/.radicle
├╴Config        /home/pxng0lin/.radicle/config.json
├╴Storage       /home/pxng0lin/.radicle/storage
├╴Keys          /home/pxng0lin/.radicle/keys
└╴Node          /home/pxng0lin/.radicle/node
</code></pre>
<blockquote>
<p>Many of the other items you see in the <code>rad self</code> output can be viewed individually. Wondering about your alias? A quick <code>rad self --alias</code> has you covered. Need to pinpoint your Radicle home folder? <code>rad self --home</code> is your friend. And for your config file location, just hit up <code>rad self --config</code>.</p>
<p>If you’re ever feeling lost, <code>rad self --help</code> will lay out all your options.</p>
</blockquote>
<p><em>NB: I'm in the habit of redacting information, even if it's said to be ok for the public, call me over-cautious, I guess.</em></p>
<hr />
<h2 id="heading-node-on">Node On?</h2>
<p>A quick check on my node status; I expect it to be off since it's my first install.</p>
<pre><code class="lang-bash">rad node status
</code></pre>
<p><strong>Output:</strong></p>
<pre><code class="lang-bash">Node is stopped.
To start it, run `rad node start`.
</code></pre>
<p>As expected, it's not running, so we will fire it up with the provided command:</p>
<p><code>rad node start</code></p>
<p><strong>Output:</strong></p>
<pre><code class="lang-bash">✓ Passphrase: 
✓ Node started (841691)
To stay <span class="hljs-keyword">in</span> sync with the network, leave the node running <span class="hljs-keyword">in</span> the background.
To learn more, run `rad node --<span class="hljs-built_in">help</span>`.
</code></pre>
<p>For extra commands, and to find out how to stop the node use <code>rad node --help</code></p>
<hr />
<h2 id="heading-starting-a-new-project">Starting a new project</h2>
<p>Moving to the YouTube video by Nader, we will now initiate a new repo, using the Fund Me tutorial from Updraft as the project. The only difference will be replacing GitHub with Radicle; everything else should be the same.</p>
<pre><code class="lang-bash">mkdir rad-foundry-fund-me-23
</code></pre>
<p>We then change directory into the project folder and initiate the Radicle project.</p>
<p>Because I'm creating a brand new project, we need to ensure we have a git repo ready, so I'll initialise a git repo first, then the Radicle repo afterwards.</p>
<pre><code class="lang-bash">git init
</code></pre>
<p><strong>Output:</strong></p>
<pre><code class="lang-bash">hint: Using <span class="hljs-string">'master'</span> as the name <span class="hljs-keyword">for</span> the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use <span class="hljs-keyword">in</span> all
hint: of your new repositories, <span class="hljs-built_in">which</span> will suppress this warning, call:
hint: 
hint:   git config --global init.defaultBranch &lt;name&gt;
hint: 
hint: Names commonly chosen instead of <span class="hljs-string">'master'</span> are <span class="hljs-string">'main'</span>, <span class="hljs-string">'trunk'</span> and
hint: <span class="hljs-string">'development'</span>. The just-created branch can be renamed via this <span class="hljs-built_in">command</span>:
hint: 
hint:   git branch -m &lt;name&gt;
Initialized empty Git repository <span class="hljs-keyword">in</span> /home/pxng0lin/web3/projects/radicle_xyz/rad-foundry-fund-me/.git/
</code></pre>
<p>The Radicle repo requires at least one commit, so I'll create a README.md file and commit it. I changed the initial branch name to 'main'; this was purely a preference, not a requirement.</p>
<pre><code class="lang-bash">git branch -m main
touch README.md
git add README.md
git status
</code></pre>
<p><strong>Output:</strong></p>
<pre><code class="lang-bash">On branch main

No commits yet

Changes to be committed:
  (use <span class="hljs-string">"git rm --cached &lt;file&gt;..."</span> to unstage)
        new file:   README.md
</code></pre>
<p>We will commit this using the command <code>git commit</code>. It will open an editor asking for a commit message; type the message, press <code>ctrl+o</code> to save, then <code>ctrl+x</code> to exit, and we are good. Now I'm ready to initialise the Radicle repo.</p>
<p><strong>Output:</strong></p>
<pre><code class="lang-bash">[main (root-commit) 9ff86ae] Creation of README.md file only
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 README.md
</code></pre>
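<p>The steps above can also be condensed into a non-interactive script: <code>git init -b</code> (Git 2.28+) names the initial branch directly, and <code>git commit -m</code> skips the editor entirely. A sketch in a scratch directory (the directory name and local identity are placeholders):</p>

```shell
# Condensed, non-interactive version of the steps above, in a scratch dir.
mkdir -p rad-demo && cd rad-demo
git init -q -b main                  # -b names the initial branch (Git >= 2.28)
git config user.name  "demo"         # local identity so the commit succeeds
git config user.email "demo@example.com"
touch README.md
git add README.md
git commit -q -m "Creation of README.md file only"
git log --oneline                    # one root commit on main
```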
<p>Initialising the new Radicle repo.</p>
<pre><code class="lang-bash">rad init
</code></pre>
<p><em>TIP: to clear any entered commands and returned outputs from your terminal, use either</em><code>ctrl+l</code><em>(ell) or type</em><code>clear</code><em>, I like to clean up the screen from time-to-time.</em></p>
<p>So now we need to enter some information about the new project for the repo.</p>
<pre><code class="lang-bash">✓ Name foundry-fund-me
✓ Description A tutorial project <span class="hljs-keyword">in</span> the Foundry 101 course on Cyfrin to learn how to professionally deploy code, master the art of creating fantastic tests, and gain insights into advanced debugging techniques.
✓ Default branch main
✓ Visibility public
✓ Passphrase: 
✓ Unsealing key...
✓ Repository foundry-fund-me created.

Your Repository ID (RID) is rad:z43pr3L72n8wT74KSHcty9fEY5JaL.
You can show it any time by running `rad .` from this directory.

◢ Upload <span class="hljs-keyword">done</span> <span class="hljs-keyword">for</span> rad:[REDACTED] to [REDACTED] : signal: 9 (SIGKILL)✓ Repository successfully synced to [REDACTED] 
◢ Uploading rad:[REDACTED] to [REDACTED] Compressing objects: 100% (8/8)✓ Repository successfully synced to [REDACTED] 
✓ Repository successfully synced to 2 node(s).

Your repository has been synced to the network and is now discoverable by peers.
View it <span class="hljs-keyword">in</span> your browser at:

    https://app.radicle.xyz/nodes/seed.radicle.garden/rad:z43pr3L72n8wT74KSHcty9fEY5JaL

To push changes, run `git push`.
</code></pre>
<p><strong>And we're off!</strong></p>
<p>So the setup is done, and I'm ready to begin the development stage, following along as per the course. I'll end it here for today and begin a new article as I complete the stages of the course, so as not to make this too long of a post.</p>
]]></content:encoded></item><item><title><![CDATA[pxng0lin@laptop:~$ echo $AboutMe]]></title><description><![CDATA[Who Am I?

pseudonym: pxng0lin (pangolin)

location: United Kingdom

interests: data, learning, privacy, web3, independence from 9-5 (don't we all yearn for that?)


Some deets.

profession: Previously an analyst (data, forecasting, resource), all th...]]></description><link>https://unchained.pxng0lin.xyz/pxng0linlaptop-echo-aboutme</link><guid isPermaLink="true">https://unchained.pxng0lin.xyz/pxng0linlaptop-echo-aboutme</guid><category><![CDATA[aboutme]]></category><category><![CDATA[web3.0]]></category><category><![CDATA[Blockchain]]></category><category><![CDATA[#cybersecurity]]></category><category><![CDATA[introduction]]></category><dc:creator><![CDATA[Isa]]></dc:creator><pubDate>Fri, 24 May 2024 01:57:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1716517976780/c3b541a3-b44d-4ed4-b44a-31ac38bfa901.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-who-am-i">Who Am I?</h2>
<ul>
<li><p><strong>pseudonym:</strong> pxng0lin (pangolin)</p>
</li>
<li><p><strong>location:</strong> United Kingdom</p>
</li>
<li><p><strong>interests:</strong> data, learning, privacy, web3, independence from 9-5 (don't we all yearn for that?)</p>
</li>
</ul>
<h2 id="heading-some-deets">Some deets.</h2>
<ul>
<li><p><strong>profession:</strong> Previously an analyst (data, forecasting, resource): all things data, with forecasting, resource modelling and prediction. Over the past four years I've transitioned into cybersecurity, reignited my love of hacking, and brought my problem-solving and data skills over with me.</p>
</li>
<li><p><strong>experience:</strong> Mainly in data for analytics, from insight to resource forecasting, and modelling and prediction of customer data; this was all before the 'dream' that is now AI.</p>
<ul>
<li><p><strong>Web2:</strong> I started to dabble in hacking several years back, utilising platforms like Bugcrowd, Intigriti and HackerOne (great platforms still today). I earned some rep, learnt a lot more than I reported, and built a lot of scripts in Bash and Python. I realised quite quickly that there are too many hackooors in that space: knowledge is a scramble to acquire, and research takes a while, especially to reach a top level. That's not the case for everyone, but for me and my circumstances, I didn't have the time (other responsibilities).</p>
</li>
<li><p><strong>Web3:</strong> I discovered <a target="_blank" href="https://immunefi.com">Immunefi</a> c. 2020, when looking at crypto and wondering, "can this stuff be hacked?". I joined their Discord server and saw conversations about using <a target="_blank" href="https://github.com/crytic/slither">Slither</a> to detect vulnerabilities in smart contracts, with talk of the results being high in false-positives (smart contracts? What's one of them?). So I cloned the Slither tool from GitHub and ran it over a mainnet smart contract from Etherscan (shh... naughty!), and the results showed what I thought were valid vulnerabilities ("Oh great, just like using web2 tools, such as <a target="_blank" href="https://github.com/projectdiscovery/nuclei">Nuclei</a>.."). They were not, really, and it wasn't that simple!</p>
</li>
<li><p><strong>languages speaking/coding:</strong> I'm no expert in any; I have used/use several and continue to learn. In no particular order:</p>
<ul>
<li><p>VBA</p>
</li>
<li><p>SQL</p>
</li>
<li><p>Python</p>
</li>
<li><p>R</p>
</li>
<li><p>Bash</p>
</li>
<li><p>Solidity</p>
</li>
<li><p>JavaScript</p>
</li>
<li><p>English (Native, British)</p>
</li>
<li><p>Arabic (Fus-ha)</p>
</li>
</ul>
</li>
</ul>
</li>
</ul>
    <div data-node-type="callout">
    <div data-node-type="callout-emoji">💬</div>
    <div data-node-type="callout-text">As you can see, I don't share much personal information. I want my internet footprint to reveal minimal detail about my personal life; rather, it should be an output of my interests and the things I do, and a way of interacting with others who share those interests, to enable learning and discovery.</div>
    </div>


<h2 id="heading-ambitions">Ambitions</h2>
<p>I remember a while back looking at my options and potential paths in the web3 space, as opposed to my current career in cybersecurity, which is mainly web2 and focuses on managing a business's assets with regard to vulnerabilities. I came across the <a target="_blank" href="https://github.com/spearbit/proposals/discussions/3">Spearbit GitHub</a> repo that broke down the different levels of the positions they have, and this gave me a starting point for where to aim, for what I consider entry level.</p>
<blockquote>
<ul>
<li><p>Spearbit Roles</p>
<ul>
<li><p>Junior Security Researcher (JSR)</p>
</li>
<li><p>Associate Security Researcher (ASR)</p>
</li>
<li><p>Security Researcher (SR)</p>
</li>
<li><p>Lead Security Researcher (LSR)</p>
</li>
</ul>
</li>
<li><p>Promotion Flow</p>
<ul>
<li><p>JSR to ASR</p>
</li>
<li><p>ASR to SR</p>
</li>
<li><p>SR to LSR</p>
</li>
</ul>
</li>
</ul>
</blockquote>
<p>My aim was to become proficient enough in this space to at least be considered a JSR. In doing so, I would be in a position to apply for the role at places like Spearbit, or at least have a reference point for my level when other opportunities arise.</p>
<p>Fast forward a little: the aim is still proficiency, but after experiencing audit competitions, and seeing how quickly vulnerability classes mature (severities sliding from Critical to Low in under a year), I started to believe that just meeting the expectation <em>would</em> be suitable, but wouldn't separate me from the rest.</p>
<p>After a Twitter/X post from <a target="_blank" href="https://hashnode.com/@dacian">Dacian</a> (great chap!) advertising a role of <a target="_blank" href="https://x.com/DevDacian/status/1783435846806560919">LSR at Cyfrin</a> (not applying, lol), I started to think about what I needed, and how I could prove it to potential employers. I dropped Dacian a DM too, just to get a bit of insight into what I could do without becoming a content creatooor all over Twitter/X. I'm not one for that sort of attention, nor do I really enjoy reading threads of regurgitated knowledge; good for those that have done it, but as many things do, it gets old and "seen it before" quite quickly. You could argue the same of blogs, but for me this is an easier way to point someone to what you've been doing, what you know, and how you've applied what you've learnt, without having to link to tweets lost in your Twitter/X history.</p>
<p>So, to wrap up! I want to use this space to share learnings from competitions, vulnerabilities I've reported and/or read about, and anything web3 or coding related that I do as I venture deeper into security research. Emphasis on research, since that really is my main interest: I research, I try, I win and I fail, but in general I have a passion for learning, problem solving, and sharing with family and friends (sometimes to blank faces and "that's nice" smiles). Hopefully now I can widen my reach to a bigger audience, and put myself in the window for the next role that comes up in the future.</p>
]]></content:encoded></item></channel></rss>