
Binh Nguyen's Home Page

Email: dtbnguyen(at)gmail(dot)com

Website: http://sites.google.com/site/dtbnguyen/

Blog: http://dtbnguyen.blogspot.com/

LinkedIn: https://www.linkedin.com/in/dtbnguyen

GitHub: https://github.com/dtbnguyen

SoundCloud: https://soundcloud.com/dtbnguyen

Instagram: https://www.instagram.com/dtbnguyen2018/

YouTube: https://www.youtube.com/channel/UCwVJG67iHHPbmBxuHVbyOlw/playlists

Scribd: https://www.scribd.com/dtbnguyen

Academia: https://independent.academia.edu/BinhNguyen100

Academic Room: http://www.academicroom.com/users/dtbnguyen

Facebook: http://www.facebook.com/dtbnguyen1234

Tumblr: http://dtbnguyen.tumblr.com/

Twitter: https://twitter.com/dtbnguyen

Lastfm: http://www.last.fm/user/dtbnguyen

OzBargain: https://www.ozbargain.com.au/user/156239

Whirlpool: http://forums.whirlpool.net.au/user/690316

Beatport: http://dj.beatport.com/dtbnguyen

Coursera: https://www.coursera.org/user/i/598b31ea3113438c624cd0ad4cf97618

Individurls: http://individurls.com/myfeeds/dtbnguyen/

AboutMe: https://about.me/dtbnguyen

This page is rarely and irregularly updated.

Short Biography

Binh Nguyen was born in Melbourne, Australia. He has multiple undergraduate and postgraduate degrees in multiple fields but his primary interest remains in the IT sector.

His background is strongly biased towards science and mathematics. Nonetheless, he does have an appreciation for the arts, humanities and sport. He has broad experience in IT and a number of his product concepts have been commercialised.

Some of his technical documents have been incorporated into the Linux Documentation Project ("Linux Dictionary" and "Linux Filesystem Hierarchy", www.tldp.org/guides.html). Furthermore, they are used as reference books in many universities and colleges, and for professional certification purposes, around the world.

Moreover, as can be demonstrated below, his articles and documents have been published in both industry and enthusiast publications, submitted to government, and he has a strong interest in the latest innovations in science and technology.

For samples of his work and an indication of his interests please see his website, blog, etc...

Personal Blog

http://dtbnguyen.blogspot.com/

Sample Citations

Some subjects/courses where his documents are used

"CSC2408 Software Development Tools - University of Southern Queensland", http://www.sci.usq.edu.au/courses/csc2408/semester2/resources/ldp/index.html

"CSC3412 System and Security Administration - University of Southern Queensland", http://www.sci.usq.edu.au/courses/CSC3412/resources/dvdrom/index.html

"CSC3412 - Allan Hancock College", http://www.coursehero.com/sitemap/schools/223-Allan-Hancock-College/courses/761192-CSC3412/

"CIS170 - Introduction to UNIX - Kishwaukee College", http://www.kishwaukeecollege.edu/faculty/dklick/cis170/

"System Administration/Operation - Universitat Politècnica de Catalunya", http://studies.ac.upc.edu/FIB/ASO/llibres.html

"File System Management - Politeknik Elektronika Negeri Surabaya", http://student.eepis-its.edu/~izankboy/laporan/adminlinuxpdf/Manajemen%20Sistem%20Fileppt.pdf

"Design and System and Network Administration - Universidad Rey Juan Carlos", http://gsyc.escet.urjc.es/~cespedes/dasr/

"Operating Systems - James Madison University", https://users.cs.jmu.edu/abzugcx/public/Operating-Systems/Assignments.pdf

"CompTIA Linux Course", http://www.coursehero.com/textbooks/44858-Getting-Started-with-Linux-Novells-Guide-to-CompTIAs-Linux-Course-3060/

"Making the Transition to Linux: A Guide to the Linux Command Line Interface for Students - Washington University", http://clusters.engineering.wustl.edu/guide/guide/more_on_linux.html

"Universidad de Cádiz", http://www.uca.es/centro/1C11/wuca_fichasig_todasasig_xtitulacion?titul=1711

"SUNWAH PerL LINUX - Training and Development Centre", http://web.archive.org/web/20090604053516/http://www.swpearl.com/eng/scripts/dictionary/

"Notre Dame University", http://ndustudents.com/index.php?dir=Tutorials%2FLINUX+TUTORIAL%2FSome+Unix+Tutorial%2FUnix-PDF-Tutorials%2F&download=Linux-Filesystem-Hierarchy.pdf

"University of Southern Floridia - Polytechnic", http://softice.poly.usf.edu/wiki/index.php/ELSA:lab-03

"Lac Hong University", http://elib.lhu.edu.vn/handle/123456789/5040

Some work where his documents have been cited

"Development of a robot-based magnetic flux leakage inspection system", http://scidok.sulb.uni-saarland.de/volltexte/2011/4420/pdf/Dissertation_Yunlai_Li.pdf

"Remote Access Forensics for VNC and RDP on Windows Platform", http://ro.ecu.edu.au/cgi/viewcontent.cgi?article=1007&context=theses_hons

"A Scattered Hidden File System", http://www.h.kyoto-u.ac.jp/staff/extra/142_hioki_h_0_next/research/DH/files/ashfs_steg04.pdf

"Long-Term Operating System Maintenance", http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA479294

"Techniques for the Abstraction of System Call Traces to Facilitate the Understanding of the Behavioural Aspects of the Linux Kernel", http://spectrum.library.concordia.ca/7075/1/Fadel_MASc_S2011.pdf

"Hosting A Server for Education Programming of Web Applications at EPI Ltd.",

http://edice.vos.cz/files/swf/3150_bc_brace_supa_ANG_2010.htm

"Dissertation on Smart Systems", https://wiki.abertay.ac.uk/display/~0400653/Dissertation

"File System Management", http://student.eepis-its.edu/~izankboy/laporan/adminlinuxpdf/4%20sistem%20managementfile.pdf

"Master's Theses and Project Reports", http://digitalcommons.calpoly.edu/cgi/viewcontent.cgi?article=1600&context=theses

"Project Report", http://www.pages.drexel.edu/~nsk62/files/Project_report.pdf

Some work that he has reviewed

"Cryptoloop loop device encryption HOWTO", http://www.tldp.org/HOWTO/html_single/Cryptoloop-HOWTO/#credits

"Mainframe Simulation HOWTO", http://www.tldp.org/HOWTO/html_single/Mock-Mainframe/#AEN31

Sample Technology Documents

Please note that the 'Cloud and Internet Security' report is now only available via the Amazon and Google Play Books stores.

https://play.google.com/store/books/author?id=Binh+Nguyen

http://www.amazon.com/mn/search/?_encoding=UTF8&camp=1789&creative=390957&field-author=Binh%20Nguyen&linkCode=ur2&search-alias=digital-text&sort=relevancerank&tag=bnsb-20&linkId=3BWQJUK2RCDNUGFY

Cloud and Internet Security

ABSTRACT

A while back I wrote two documents called 'Building a Cloud Service' and the 'Convergence Report'. They basically documented my past experiences and detailed some of the issues that a cloud company may face as it is being built and run. Based on what has transpired since, a lot of the concepts mentioned in those documents are being widely adopted, or things are trending towards them. This is a continuation of those documents and will attempt to analyse the issues that are faced as we move towards the cloud, especially with regards to security. Once again, we will use past experience, research, as well as current events and trends in order to write this particular report.

...

No illicit activity (as far as I know and have researched) was conducted during the formulation of this particular document. All information was obtained only from publicly available resources, and any information or concepts that are likely to be troubling have been redacted. Any relevant vulnerabilities or flaws that were found were reported to the relevant entities in question (months have passed).

Please note that the 'Convergence Effect' report is now only available via the Amazon and Google Play Books stores.

https://play.google.com/store/books/author?id=Binh+Nguyen

http://www.amazon.com/mn/search/?_encoding=UTF8&camp=1789&creative=390957&field-author=Binh%20Nguyen&linkCode=ur2&search-alias=digital-text&sort=relevancerank&tag=bnsb-20&linkId=3BWQJUK2RCDNUGFY

Convergence Effect

ABSTRACT

A while back I wrote a document called "Building a Cloud Service". It was basically a document detailing my past experiences and some of the issues that a cloud company may face as it is being built and run. Based on what has transpired since, a lot of the concepts mentioned in that particular document are being widely adopted, or things are trending towards them. This is a continuation of that particular document and will attempt to analyse the issues that are faced as we move towards the cloud, especially with regards to media and IT convergence. Once again, we will use past experience, research, as well as current events and trends in order to write this particular report. I hope that this document will prove to be equally useful and will provide an insight not only into the current state of affairs but also a blueprint for those who may be entering the sector, as well as those who may be using resources/services from this particular sector. Please note that this document has gone through many revisions, and drafts may have gone out over time. As such, there will be concepts that may have been picked up and adopted by some organisations (as was the case with the "Cloud" document and several technologies) while others may have simply broken cover while this document was being drafted and sent out for comment. It also has a more strategic/business slant when compared to the original document, which was more technically orientated. No illicit activity (as far as I know and have researched) was conducted during the formulation of this particular document. All information was obtained only from publicly available resources, and any information or concepts that are likely to be troubling have been redacted. Any relevant vulnerabilities or flaws that were found were reported to the relevant entities in question (months have passed).

Please note that the 'Building a Cloud Computing Service' report is now only available via the Amazon and Google Play Books stores.

https://play.google.com/store/books/author?id=Binh+Nguyen

http://www.amazon.com/mn/search/?_encoding=UTF8&camp=1789&creative=390957&field-author=Binh%20Nguyen&linkCode=ur2&search-alias=digital-text&sort=relevancerank&tag=bnsb-20&linkId=3BWQJUK2RCDNUGFY

Building a Cloud Computing Service

ABSTRACT

As the world moves towards a more globally and electronically connected future, access to the Internet is becoming more commonplace for business, educational, as well as entertainment purposes. Virtually everyone now has a small, mobile device of some sort which will allow them access to the Internet. The concept of "Cloud Computing" was born as a direct consequence of such connectivity and this has resulted in services advancing towards the Internet "Cloud". This allows smaller devices to possess far greater functionality than ever before whether it is via websites and/or other secondary protocols. This document provides advice on how to build a cloud service whether that may be for commercial, educational, and/or more altruistic purposes. It is based on past experience, general knowledge, as well as personal research. It is not intended to be read by people who are new to computing. While it was originally intended only to cover technical aspects of building a cloud service-based company it has since expanded into a document that covers the actual business aspects of building a cloud service-based company as well. It uses Open Source technologies, but takes concepts from all fields.

Linux Filesystem Hierarchy Document

ABSTRACT

This document outlines the set of requirements and guidelines for file and directory placement under the Linux operating system according to those of the FSSTND v2.3 final (January 29, 2004) and also their actual implementation on an arbitrary system. It is meant to be accessible to all members of the Linux community, to be distribution independent, and to discuss the impact of the FSSTND and how it has managed to increase the efficiency of support, the interoperability of applications, system administration tools, development tools, and scripts, as well as greater uniformity of documentation for these systems.

Linux Dictionary

ABSTRACT

This document is designed to be a resource for those Linux users wishing to seek clarification on Linux/UNIX/POSIX related terms and jargon. At approximately 24700 definitions and two thousand pages it is one of the largest Linux related dictionaries currently available. Due to the rapid rate at which new terms are being created it has been decided that this will be an active project. We welcome input into the content of this document. At this moment in time half yearly updates are being envisaged.

Interview/Article with Brazilian 'Geek' Magazine

Extracts from an article that was written about the Linux Dictionary.

Computer Dictionary

ABSTRACT

The Computer Dictionary is an open source project, released under terms of the Creative Commons ShareAlike 2.0 license, that aims to develop a Docbook XML glossary database containing definitions of computing nomenclature. The primary application for the source is realized in context of Docbook XML-based publishing systems. However, as a desired side-effect, the glossary is also available online as a 'browsable' reference. What makes this project unique is that it is the first free and open glossary database to be developed specifically for use with Docbook XML-based publishing systems.

Sample Scripts and Computer Programs

get_country_satellite_links-1.00.zip

https://dtbnguyen.blogspot.com/2023/01/get-country-satellite-map-links-script.html

GET COUNTRY SATELLITE MAP LINKS SCRIPT

I've been working on some projects that require access to satellite imagery. What I didn't realise is that there isn't really a menu of links that you can click on to see what each country looks like, so I created one. Not all links work perfectly because they were automatically generated, obviously. The context of this work will make more sense as time progresses.

Obviously, to make the most of this script you'll need to be able to understand how to script/program.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

get_lawanddisorder_podcast-1.03.zip

https://dtbnguyen.blogspot.com/2022/07/law-and-disorder-downloader-script_27.html

LAW AND DISORDER DOWNLOADER SCRIPT

There are often websites that don't interact well with download managers so you sometimes have to build a custom script to deal with it. This is a custom downloader script for MP3 podcasts for:

https://lawanddisorder.org/

Obviously, to make the most of this script you'll need to be able to understand how to script/program.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

wikipedia_people_list_correlation_checker-1.04.zip

https://dtbnguyen.blogspot.com/2022/02/wikipedia-people-list-correlation_2.html

WIKIPEDIA PEOPLE LIST CORRELATION CHECKER SCRIPT

After noticing similarities/correlations in individuals/groups/countries who do well/not so well economically I decided to drill down a bit deeper by looking for cross correlations between high net worth individuals (the wealthiest people in the world; this pertains almost exclusively to the billionaire class). I also looked at data for other people, as you'll see by the functions/lists that I looked at. If I could find data for other classes of people I'd look at it, but this data isn't as easy to find and would likely result in privacy issues. This script is one of the consequences of my continued research into the nature of the economic system/s in play:

http://dtbnguyen.blogspot.com/2022/01/country-gdp-growth-correlation-checker.html

https://dtbnguyen.blogspot.com/2021/12/not-much-has-changed-from.html

https://dtbnguyen.blogspot.com/2021/11/what-people-eatate-random-stuff-and-more.html

https://dtbnguyen.blogspot.com/2021/11/abs-postcode-research-script-random.html

https://dtbnguyen.blogspot.com/2021/09/wikipedia-list-data-scan-pack-random.html

http://dtbnguyen.blogspot.com/2021/01/how-elite-maintain-power-random-stuff.html

I could only include a small subset of test data (and no scraped data) as the amount of data that gets scraped is pretty large (several hundred MB per list is common), and I could only partially test due to time constraints. If you're honest with yourself, this is the type of script/program you leave to run on your computer for days until it finishes, or run on/off whenever your computer has spare processing time.

Pretty interesting nonetheless if you understand how to interpret and use the data (you'll get some interesting URLs to look at even if you don't look at cross correlations).

Obviously, to make the most of this script you'll need to be able to understand how to script/program.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

country_gdp_growth_correlation_checker-1.03.zip

https://dtbnguyen.blogspot.com/2022/01/country-gdp-growth-correlation-checker.html

COUNTRY GDP GROWTH CORRELATION CHECKER SCRIPT

After noticing similarities/correlations in individuals/groups/countries who do well/not so well economically I decided to drill down a bit deeper by looking for cross correlations between high economic growth and general economic policy/history. This script is one of the consequences of such research:

https://dtbnguyen.blogspot.com/2021/12/not-much-has-changed-from.html

https://dtbnguyen.blogspot.com/2021/11/what-people-eatate-random-stuff-and-more.html

https://dtbnguyen.blogspot.com/2021/11/abs-postcode-research-script-random.html

https://dtbnguyen.blogspot.com/2021/09/wikipedia-list-data-scan-pack-random.html

http://dtbnguyen.blogspot.com/2021/01/how-elite-maintain-power-random-stuff.html

Obviously, to make the most of this script you'll need to be able to understand how to script/program.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

postcode_research-1.06.zip

https://dtbnguyen.blogspot.com/2021/11/abs-postcode-research-script-random.html

ABS POSTCODE RESEARCH SCRIPT

This script helps enable postcode based research based on ABS data in Australia. Uncomment the relevant lines of code and run the script in the proper sequence. This script assumes you know how to code.

Note that data processing can take a lot of time and that the final file is so large that it may crash a lot of modern spreadsheet programs. Most of the time, I query this final file using basic BASH code.
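As a rough example of the sort of basic BASH query I mean (the file name, postcode, and column numbers are placeholders; adjust them to whatever the script actually produces):

grep '^3000,' postcode_final.csv | cut -d',' -f1-5 | less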

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

wikipedia_list_data_scan_pack-1.02.zip

https://dtbnguyen.blogspot.com/2021/09/wikipedia-list-data-scan-pack-random.html

WIKIPEDIA LIST DATA SCAN PACK

HOW TO RUN/USAGE INSTRUCTIONS

1) Please read through the entire README file. Check the comments in both scripts to understand what they do and how they work

2) Modify and run wikipedia_list_data_scan.sh

3) Run mine_wikipedia_data.sh against the relevant target cache directory, or on its own with no parameters to process all files in the cache directory
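For example (the cache directory name is an assumption; use whatever wikipedia_list_data_scan.sh actually creates on your system):

./wikipedia_list_data_scan.sh

./mine_wikipedia_data.sh cache/Lists_of_actors

./mine_wikipedia_data.sh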

++++

wikipedia_list_data_scan.sh

Like others I like to study societal structures and how they work, funnel/choke points, commonalities, etc... This script is designed to help facilitate research into this by downloading relevant articles from Wikipedia lists and looking for relevant common words across people's profiles.

Relevant links to understand the code are the following:

https://en.wikipedia.org/wiki/Lists_of_people_by_occupation

https://en.wikipedia.org/wiki/The_World%27s_Billionaires

https://en.wikipedia.org/wiki/Lists_of_actors

I built this after noticing a lot of points of commonality between well known actors, politicians, scientists, billionaires, etc...

It's obviously pretty simple and isn't supposed to be taken too seriously. Drop to the bottom, uncomment/comment the relevant lines, and run the script to make it work.

You'll need to modify it to focus in on specific types of people and, to be honest, I was only really interested in (and only had time to analyse) a small set of people. I've included only a small subset of keyword commonalities files but haven't included cache files due to their size.

I took code and ideas from older scripts to speed up development:

https://dtbnguyen.blogspot.com/2020/02/dnsamazon-s3githubblogspotwordpress.html

https://dtbnguyen.blogspot.com/2020/02/web-server-global-sampling.html

https://dtbnguyen.blogspot.com/2020/01/dnsamazon-s3github-enumeration-pack.html

https://dtbnguyen.blogspot.com/2018/03/whitepaper-examine-script-random-stuff.html

https://dtbnguyen.blogspot.com/2017/04/news-feed-bias-checker-random-stuff-and.html

https://dtbnguyen.blogspot.com/2017/04/news-bias-checker-2-random-stuff-and.html

https://dtbnguyen.blogspot.com/2017/05/news-homepage-bias-check-random-stuff.html

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

++++

mine_wikipedia_data.sh

This script is designed to be run against a target directory containing cache files after wikipedia_list_data_scan.sh is run or else can be run with no arguments to process all files in the cache directory.

Note that even on small lists of people, data processing can take an unexpectedly long time. You're better off running this script overnight for larger samples. I would have added high performance computing/grid style processing capabilities if this were a higher priority project. You're stuck with what this is for the time being.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

rename_open_uni_certs-1.00.zip

https://dtbnguyen.blogspot.com/2020/12/open-university-certificate-renaming.html

RENAME OPEN UNI CERTS SCRIPT

Once you're done getting free certificates from the Open University you may like to rename them to something more reasonable (as opposed to something more akin to SK195_5_statement.pdf). This script takes the PDF completion certificate files from your personal achievements section (viewable when you log in) and then renames them to something that is more sensible to the human mind.

Use a browser based download manager (such as DownloadThemAll) if you have many certificates to download. This means you don't have to figure out how to deal with bypassing authentication issues on the CLI.

https://www.open.edu/openlearn/profiles/zy934084/achievements

https://www.open.edu/openlearn/profiles/zy194210/achievements

https://www.open.edu/openlearn/profiles/zy513282/achievements

https://www.open.edu/openlearn/free-courses/full-catalogue

http://dtbnguyen.blogspot.com/2020/08/corbett-report-podcast-downloader.html

http://dtbnguyen.blogspot.com/2020/11/get-free-open-university-course-scripts_3.html

As this is the very first version of the program it may be VERY buggy.

get_free_open_uni_pdfs.sh-1.03.zip

https://dtbnguyen.blogspot.com/2020/11/get-free-open-university-course-scripts_3.html

GET FREE OPEN UNI PDF SCRIPT

This script downloads course PDF files from:

https://www.open.edu/openlearn/free-courses/full-catalogue

While it works, you'll still need to log in and do the assessments to get a certificate. Parts of the script look more complex than they actually are. Just go through the commented code and it'll make more sense.

As this is the very first version of the program it may be VERY buggy.

get_corbett_podcasts-1.02.zip

https://dtbnguyen.blogspot.com/2020/08/corbett-report-podcast-downloader.html

GET CORBETT PODCASTS SCRIPT

There are often websites that don't interact well with download managers so you sometimes have to build a custom script to deal with it. This is a custom downloader script for MP3 podcasts for:

https://www.corbettreport.com

As this is the very first version of the program it may be VERY buggy.

seclist_generator-1.01.zip

https://dtbnguyen.blogspot.com/2020/02/seclist-generator-random-stuff-and-more.html

SECLIST GENERATOR

I just wanted to see what certain seclist generation tools were like. These tools basically take a list of words and generate various permutations from them for use by high speed password cracking software.
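The underlying idea can be sketched in a single line of BASH (this is just an illustration of word permutation, not what the packaged tools actually do; the suffixes are arbitrary):

while read -r word; do echo "$word"; echo "${word^}"; echo "${word}123"; echo "${word^}2020"; done < words.txt > seclist.txt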

It's obvious that if you were to run this style of seclist against systems it would work, but it would be really slow and impractical. I checked the try rate against a test router (D-Link DSL-502T) that was being thrown out and ended up with ~800 tries/min. Obviously, in the modern age of rich websites this activity may actually be drowned out in high traffic networks.

https://gtmetrix.com/reports/www.google.com/XPoWxKQE

https://gtmetrix.com/reports/www.ibm.com/IwoT0RoF

https://gtmetrix.com/reports/www.redhat.com/aeoyoLFm

The relative slowness explains why most botnets and automated hacking systems brute force only a tiny fraction of user credentials. This method is highly impractical but useful to know about if your users have been lazy with password complexity and have used variations of words rather than genuinely random strings. It is relatively useless against well guarded networks with watchful staff. That said, it would be interesting to see how well such activity gets picked up by logging on your current network.

253823 ciml-v0_99-all.john

20553 ciml-v0_99-all.pdf

15471 ciml-v0_99-all.txt

5134 ciml-v0_99-all.wordlist

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

enumeration_pack-1.02.zip

https://dtbnguyen.blogspot.com/2020/02/dnsamazon-s3githubblogspotwordpress.html

DNS/AMAZON S3/GITHUB/BLOGSPOT/WORDPRESS ENUMERATION PACK

This is an enumeration "software pack" for DNS, Amazon S3, Github, Blogspot, and Wordpress. It obviously builds on my subdomain_resolve.sh script (which was only designed for DNS).

To enumerate a DNS domain run the relevant script with a wordlist/seclist.

To enumerate Amazon S3, first enumerate against s3.amazonaws.com via subdomain_resolve.sh. Then use aws_s3_enum.sh against the relevant s3_amazonaws_com-*-results.txt file from the results folder.

To enumerate against Github run github_enum.sh against a relevant wordlist/seclist.

To enumerate against Blogspot run blogspot_enum.sh against a relevant wordlist/seclist.

To enumerate against Wordpress run wordpress_enum.sh against a relevant wordlist/seclist.
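Typical invocations would look something like this (the argument order is an assumption and the domain and wordlist are placeholders; check the comments at the top of each script):

./subdomain_resolve.sh example.com wordlist.txt

./subdomain_resolve.sh s3.amazonaws.com wordlist.txt

./aws_s3_enum.sh results/s3_amazonaws_com-*-results.txt

./github_enum.sh wordlist.txt

./blogspot_enum.sh wordlist.txt

./wordpress_enum.sh wordlist.txt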

I obviously thought about using a more generalised script but realised that it wouldn't work across the board. Naming conventions often don't carry across all websites, and it's easy to create new enumerators by simply substituting the correct parameters, so I'll leave them as individual scripts for the time being.

These scripts are obviously very simple but they will give you a good idea of how similar tools work, in a simpler framework. They're also pretty harmless because all they really do is look for a website/webpage and download that page if and when it's available.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

webscan-1.01.zip

https://dtbnguyen.blogspot.com/2020/02/web-server-global-sampling.html

WEBSCAN

I just wanted to see, by taking a tiny sample of the Internet, what is out there with regards to websites and whether the Internet could survive DNS capability being taken out.

I thought I'd end up with a lot of legitimate websites (like in my DNS, Github, Amazon AWS S3 bucket enumeration experiments) but it ended up being more like the results from so called security search engines:

https://www.shodan.io/

https://www.binaryedge.io/

Just random stuff is out there. A lot of it unconfigured, old, misconfigured, unpatched, etc...

I didn't actually scan the entire Internet. I realised pretty early on that if I tried, the process would likely last months (even if I optimised it and ran it again it probably wouldn't make much of a difference, because there are many issues at play including the network connections that need to be made, the total number of scans that need to be done, download quotas, etc...). At most, I looked at a few hundred servers per country.

Anyhow, this is the source code if you're interested. It's essentially a primitive hybrid enumerator/web crawler. It can easily be converted to something like Shodan or Binary Edge, or used to monitor/audit your own network as well. This will make more sense as I work on other projects or as your experience grows. There's some randomisation thrown in to make things look less strange to monitoring systems.

The source code as released obviously doesn't do anything. It's obvious that you need to uncomment, run things in the correct sequence, and modify things in many of the right places for it to do anything significant (a block against script kiddies). As a side note, you need to make significant changes for it to be used in an offensive capacity. Its primary use is for research/study.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

enumeration_pack-1.01.zip

https://dtbnguyen.blogspot.com/2020/01/dnsamazon-s3github-enumeration-pack.html

DNS/AMAZON S3/GITHUB ENUMERATION PACK

This is an enumeration "software pack" for DNS, Amazon S3, and Github and obviously builds on my subdomain_resolve.sh script (which was only designed for DNS).

To enumerate a DNS domain, run the relevant script with a wordlist/seclist. To enumerate Amazon S3, first enumerate against s3.amazonaws.com via subdomain_resolve.sh. Then use aws_s3_enum.sh against the relevant s3_amazonaws_com-*-results.txt file from the results folder.

To enumerate against Github run github_enum.sh against a relevant wordlist/seclist.

These scripts are obviously very simple but they will give you a good idea of how similar tools work, in a simpler framework. They're also pretty harmless because all they really do is look for a website/webpage and download that page if and when it's available.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

news_vix-1.02.zip

https://dtbnguyen.blogspot.com/2019/12/news-vix-script-jim-simonsed-thorp.html

NEWS VIX SCRIPT

This script is an iteration of my news_page_bias.sh, news_homepage_bias.sh, news_feed_bias.sh, and planet_check.sh scripts. It's designed to check financial news sentiment regarding a particular sector by examining news feeds and homepages. I've built a custom solution primarily because I want to avoid the issue of fake news feeds, fake social media accounts, etc... which can lead to improper representations of reality, especially as many of them often create perception bubbles. By taking a look at many different factors I hope to be able to better gauge market sentiment.

Obviously, it's pretty rudimentary and reads the feeds included via the first parameter. Add feeds as you want. Comment out newsfeeds that are irrelevant using the "#" symbol, like in Python and BASH (some feeds aren't really possible to check because of their structure, or you can't get a decent gauge of bias because the sizes of the feeds vary drastically).
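A hypothetical feeds file and invocation might look like this (the script name is assumed from the archive name and the feed URLs are placeholders):

echo "https://example.com/markets.xml" >> feeds.txt

echo "#https://example.com/ignored-feed.xml" >> feeds.txt

./news_vix.sh feeds.txt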

It's not supposed to be taken too seriously (though I may write something more relevant later on?).

I've been very surprised/perplexed by some of the results (a good example: a lot of websites that don't look biased seem to be, while others that look biased come across as more neutral). That said, the check is done on a very small sample that often differs from site to site, which makes adequate quantification of bias very difficult.

As this is the very first version of the program it may be VERY buggy.

tradesim_quote_converter-1.00.zip

TradeSim-Files.zip

https://dtbnguyen.blogspot.com/2019/12/tradesim-financial-trading-quote.html

TRADESIM QUOTE CONVERTER SCRIPT

I found a financial trading game/simulator called TradeSim:

https://sourceforge.net/projects/trading-simulator/

that had a very similar file format as that used for data from the following website:

https://www.asxhistoricaldata.com/archive/

https://www.asxhistoricaldata.com/

and which is also used by my asx_analyser.sh script. I decided to build a script to create suitable quote files for use with TradeSim so I could practice my trading skills. To use it, download and unzip the relevant archive files. Then run this script. The quote files that can be used with TradeSim will be dumped to the TradeSim-Files folder.
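The workflow is roughly the following (the script name is assumed from the archive name, and the data archive name is a placeholder):

unzip asx_historical_archive.zip

./tradesim_quote_converter.sh

ls TradeSim-Files/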

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

asx_analyser-1.02.zip

https://dtbnguyen.blogspot.com/2019/12/asx-share-analyser-script-random-stuff.html

ASX SHARE ANALYSER SCRIPT

I wanted a simple, basic ASX share analyser for files from:

https://www.asxhistoricaldata.com/archive/

https://www.asxhistoricaldata.com/

This script does only simple statistical analysis and should only be used on a small set of files. Processing years of ASX history can take hours on a mid-range computer. Ideally this sort of thing would be done via a suitable backend such as a database or Big Data solution.
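For a quick manual sanity check of the same data you can do something like this (the ticker, file name, and the column holding the closing price are assumptions; check the actual file layout first):

awk -F',' '$1 == "BHP" {sum += $6; n++} END {if (n) print "records:", n, "average close:", sum / n}' 20190104.txt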

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

mini_search_engine-1.03.zip

https://dtbnguyen.blogspot.com/2019/11/mini-search-engine-prototype-random.html

MINI SEARCH ENGINE SCRIPT

I've worked on search engine technology in the past. I've never really had to build it from scratch though. This is a basic prototype. It's close to useless because of its simplicity, but it's very useful if you want to learn the fundamentals of what makes up a search engine. It's intended to help me in my work on building a medical search engine (and possibly several other projects which require this style of functionality) later down the line.

I'm aware that there is a lot of debug code still in here. It's mostly intended for my own personal use, and it's pretty easy to use as is though.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

web_crawler-1.05.zip

https://dtbnguyen.blogspot.com/2019/10/web-crawler-script-random-stuff-and-more.html

WEB CRAWLER SCRIPT

My local setup often doesn't allow me enough control to run Python, Java, or Ruby based crawlers. Moreover, these solutions are often slow, inflexible, very limited, inefficient, lacking in feedback, etc... so I decided to build my own. It has some limitations, such as not being able to deal with Javascript and being slow relative to crawlers built using compiled languages, but I'm fine with that as my needs are pretty basic. It's good for automatically finding pages that may be useful but that you don't know about. Use it in combination with a mass downloader of some sort and it's obvious this can often be much more efficient than pure website downloaders such as Teleport Pro and HTTrack:

http://www.tenmax.com/teleport/exec/home.htm

http://www.httrack.com/

Given its nature it's probably better suited to smaller projects.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

gather_files-1.00.zip

https://dtbnguyen.blogspot.com/2019/10/gather-files-script-random-stuff-and.html

GATHER FILES SCRIPT

Sometimes you want a scripted way to gather up files of a particular type from a given filesystem. This obviously has computer forensic/security, general administration, and other purposes.
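The core of the idea is a find invocation along these lines (the search path, file type, and destination are placeholders; --backup=numbered just avoids clobbering files with the same name):

mkdir -p /tmp/gathered

find /home -iname '*.pdf' -exec cp --backup=numbered {} /tmp/gathered/ \;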

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

compare_images-1.02.zip

compare_images-1.01.zip

https://dtbnguyen.blogspot.com/2019/09/compare-images-script-random-stuff-and.html

COMPARE IMAGES SCRIPT

I wanted a way to check for image similarity across a random bunch of images. It works well for certain types of images but not others. It needs a lot of tweaking/tuning on an individual basis to get it to work well for a given set of images, something that I wasn't expecting.
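One common way of doing this kind of check is ImageMagick's compare utility (whether or not the script uses it, the tuning problem is the same; the choice of metric matters a lot):

compare -metric RMSE image1.png image2.png null: 2>&1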

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

4coder_downloader-1.00.zip

https://dtbnguyen.blogspot.com/2019/09/4coder-website-downloader-script-random.html

4CODER DOWNLOADER SCRIPT

I found some code on the 4coder website which I thought was useful. The more I looked the more useful code I found, so I built this script to download all relevant content.

If you're wondering why I didn't use a website downloader or mass downloader they aren't as quick/efficient as this and aren't as flexible when it comes to parsing pages and automation of downloads.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

assembly_analyser-1.03.zip

https://dtbnguyen.blogspot.com/2019/08/linux-assembly-analyser-script-random.html

ASSEMBLY ANALYSER SCRIPT

I wanted a way to check which assembly commands were used in what frequency for various binaries. This script obviously does this by disassembling binaries from the BINARY_PATH variable, counting up which commands are used most often, and generating statistics from this.

It's obviously useful for forensic and security analysis as well as general interest. It's not designed to be perfect, it just gives you an overview of things.
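A stripped-down sketch of the counting step (assuming objdump's tab-separated disassembly output; /bin/ls is just an example target):

objdump -d /bin/ls | awk -F'\t' 'NF >= 3 {split($3, ins, " "); print ins[1]}' | sort | uniq -c | sort -rn | head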

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

mass_downloader-1.03.zip

https://dtbnguyen.blogspot.com/2019/08/mass-downloader-script-random-stuff-and.html

MASS DOWNLOADER SCRIPT

I needed to build a mass downloader for a medical search engine prototype that I want to work on. This is a very simple prototype that I think may work as the basis for further development. If you're wondering, this is simpler than a lot of crawlers/downloaders out there, which means it may miss out on some files but is generally much quicker.

If you're wondering, yes, there are other options than this but most have their own set of problems. I can't use something like Selenium, Sikuli, wget, etc... because they're inefficient, require use of a GUI, download everything before actually accepting/rejecting files, etc... This will work for a basic prototype. If I need something more I can always build out and upwards.

https://stackoverflow.com/questions/4602153/how-do-i-use-wget-to-download-all-images-into-a-single-folder-from-a-url

https://stackoverflow.com/questions/20116010/how-to-download-all-images-from-a-website-using-wget

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

https://sites.google.com/site/dtbnguyen/etherscan_analysis-1.02.zip

https://dtbnguyen.blogspot.com/2019/07/etherscan-analysis-script-random-stuff.html

ETHERSCAN ANALYSIS SCRIPT

I obviously work in the cryptocurrency space from time to time. Something which has been frustrating has been cryptocurrency market manipulation. I am aware of services which handle this but I wanted something of my own that I understood, could modify, could run locally, etc...

To run it, download the relevant CSV file from the etherscan.io website. Then pass it as a parameter to this script:

https://etherscan.io/token/0x0d8775f648430679a709e98d2b0cb6250d2887ef

https://etherscan.io/exportData?type=tokentxns&contract=0x0d8775f648430679a709e98d2b0cb6250d2887ef&a=&decimal=18

The sample file included in this package was from a long while back for the BAT token.

It's basically been designed for you to pipe the results to less for basic manual analysis. I was going to add some form of statistical distribution analysis, artificial intelligence, etc... capability to this but it was too cumbersome and didn't really get me enough capability for the added code so I left it out. If you want it just use something like ministat.
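So a typical run looks something like this (the script and CSV file names are placeholders):

./etherscan_analysis.sh bat_token_transfers.csv | less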

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

dell_serial_checker-1.00.zip

https://dtbnguyen.blogspot.com/2018/11/lost-dell-recovery-disc-youtube.html

DELL SERIAL CHECKER SCRIPT

After finding out that Dell's servers were out of whack and needing to recover my operating system, I wanted to see whether there was a pattern to Dell service tags, so I decided to see whether I could fuzz service tags. It is not that easy because it requires speed at the network and processing level, as well as time to deal with the CAPTCHA system that is in place. This script can produce ~50M service tags in roughly 5-6 hours on my local machine.

https://www.dell.com/support/home/au/en/aubsd1/product-support/servicetag/9cgm72s/diagnose

https://anti-captcha.com/mainpage

As such, I've left this program AS IS for the time being until I have time to build a higher performance option.
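For illustration only, the generation side of the problem is trivial in BASH (assuming the usual seven-character alphanumeric tag format); the hard part is the network, speed, and CAPTCHA handling described above:

chars=( {A..Z} {0..9} ); for i in {1..5}; do tag=""; for j in {1..7}; do tag+="${chars[RANDOM % ${#chars[@]}]}"; done; echo "$tag"; done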

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

https://sites.google.com/site/dtbnguyen/transcode_dvd_to_video_file-1.00.zip

https://dtbnguyen.blogspot.com/2018/10/linux-transcode-dvd-to-video-script.html

LINUX TRANSCODE DVD TO VIDEO SCRIPT

I've been looking for more storage space. I found out that another easy way to achieve this is to transcode DVD ISO files into compressed video files.

I've looked at a few options in the past but they've all been slow, not free, provided imperfect transcoding, or else been too complex to use. This includes Nero Recode, Format Factory, Wondershare Video Converter, DVDFab, DVDShrink, etc...

https://alternativeto.net/software/formatfactory/

In the past, I've fiddled around with HandbrakeGUI but it's been lacking especially in the usability stakes. HandbrakeCLI seems to be much better though.

Overall, you're looking at going from a 4GB DVD down to about a 700MB video file. In 'clean mode' (just the main audio track) you can get roughly 10 minute transcodes. In 'all mode' (all audio and subtitle tracks included) things take about an hour (the same as most transcoding programs).
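For reference, a basic HandBrakeCLI invocation along these lines would be (file names are placeholders and the quality value is just a reasonable starting point, not necessarily what the script uses):

HandBrakeCLI -i movie.iso -o movie.mp4 -e x264 -q 22 --main-feature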

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

https://sites.google.com/site/dtbnguyen/retranscode_video_to_audio-1.03.zip

https://dtbnguyen.blogspot.com/2018/09/linux-retranscode-video-to-audio-script.html

LINUX RETRANSCODE VIDEO TO AUDIO SCRIPT

Due to jittery network connections I often prefer to download rather than stream multimedia live from the Internet.

Recently, though, I discovered that I had run out of storage space. Obviously, I thought about deleting material but decided that an easier way to deal with it was to retranscode video to pure audio files.

Obviously, you can save gigabytes in several minutes using this technique. In fact, I saved about 1GB per minute but it can be slow going if you have a large number of files. Run it overnight, regularly via a cron job (or something similar), or in the background if that's the case.

If you're wondering it's not as easy as it sounds. You need to change parameters from time to time. Hence, the size of this script.

One bonus side effect of switching to m4a format is that it's supported by a lot of music players and smartphones out there.
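The core of the technique is a single ffmpeg call of this form (file names are placeholders and the bitrate is a matter of taste):

ffmpeg -i input.mp4 -vn -c:a aac -b:a 128k output.m4a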

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

https://sites.google.com/site/dtbnguyen/file_organiser-1.02.zip

https://dtbnguyen.blogspot.com/2018/09/linux-file-organiser-script-random.html

LINUX FILE ORGANISER SCRIPT

The point of this script is to organise folders which are a mess and which can be (within reasonable bounds) automatically organised.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

https://sites.google.com/site/dtbnguyen/wifi_check-1.02.zip

https://dtbnguyen.blogspot.com/2018/09/wifi-check-security-script-random-stuff.html

WIFI CHECK SCRIPT

This is a quick script that you can run from time to time to check the perimeter security of your WiFi networks (obviously, on secure networks you'll require 802.1x or else no WiFi access at all), for wardriving, or else for simple reconnaissance type work.

This script has advantages over applications such as NetStumbler (on Windows) and WiFiAnalyzer (on Android) because it's flexible. You can hook up a polarised/high gain/Yagi antenna and you can modify it as you see fit.
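The sort of scan being wrapped here can be reproduced manually with the standard wireless tools (the interface name is a placeholder):

sudo iwlist wlan0 scanning | grep -E 'ESSID|Quality|Encryption'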

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

https://sites.google.com/site/dtbnguyen/seek_email_crawler-1.02.zip

https://dtbnguyen.blogspot.com/2018/09/seek-email-crawler-random-stuff-and-more.html

SEEK EMAIL CRAWLER SCRIPT

If you don't already know, a lot of job boards have 'fake jobs'. These 'fake jobs' are posted by recruiters who are looking to increase the size of their candidate databases. To get around this particular problem I built this script. Just run it with the correct parameters to extract the relevant email addresses so that you can bulk email them.

As this is the very first version of the program (and I didn't have access to the original server while I was cleaning this up) it may be VERY buggy. Please test prior to deployment in a production environment.

quick_code_ranker-1.01.zip

https://dtbnguyen.blogspot.com/2018/09/quick-code-ranker-script-random-stuff.html

QUICK CODE RANKER SCRIPT

Sometimes you just want to get an idea of what someone's code repository is like, as opposed to running a full blown CI/CD type environment such as Jenkins or TravisCI. That's the point of this script. I use it in combination with my github_downloader.sh script in order to get a quick idea of what a developer's code repository is like.

https://zeroturnaround.com/rebellabs/top-10-jenkins-featuresplugins/

https://docs.travis-ci.com/user/code-climate/

As this is the very first version of the program (and I didn't have access to the original server while I was cleaning this up) it may be VERY buggy. Please test prior to deployment in a production environment.

mvs_to_other-1.03.zip

https://dtbnguyen.blogspot.com/2018/09/mvs-conversion-script-random-stuff-and.html

MVS TO OTHER SCRIPT

I obviously created my own personal version control system (called MVS) for a particular set of circumstances. The problem with MVS is that it obviously isn't compatible with standard version control systems such as CVS, SVN, GIT, etc... That's the purpose of this script. Fill in the correct fields and run it to create a repository that is compatible with 'standard version control systems'.

As this is the very first version of the program (and I didn't have access to the original server while I was cleaning this up) it may be VERY buggy. Please test prior to deployment in a production environment.

subdomain_resolve-1.02.zip

https://dtbnguyen.blogspot.com/2018/09/subdomain-resolve-security-script.html

SUBDOMAIN RESOLVE SCRIPT

Sometimes you end up on systems where you don't have administrative access. In spite of that you need to do a network/security test/audit. That's the point of this script. It's a very basic port of some DNS brute force enumeration scripts such as the following (in fact some of the included files are actually from the following two projects):

https://github.com/guelfoweb/knock

https://github.com/TheRook/subbrute

to test for available servers and relies on utilities found on most UNIX/Linux systems.

Use in combination with tools such as Spiderfoot, my network_mapper.sh script, DNS, whois, other OSINT information, etc... to get valid IP information.

http://www.spiderfoot.net/

It basically tests for DNS resolution for particular server names. The set of scripts as is will work against arbitrary server/network setups. It will work better if you have prior OSINT, TECHINT, HUMINT, etc... This will allow for creation of custom fuzzers to increase your success rate against a given domain. Testing indicates that about 10K servers can be enumerated in roughly 10 minutes.
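The core DNS check amounts to a loop like this (the domain and wordlist are placeholders):

while read -r sub; do host "${sub}.example.com" > /dev/null 2>&1 && echo "${sub}.example.com"; done < wordlist.txt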

Obviously, I could have included automated crawling and bot capabilities, but given my previous experiences with having to maintain upkeep against counterbot technology I have decided to leave it out.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

web_traverser-1.02.zip

https://dtbnguyen.blogspot.com/2018/09/web-traverser-security-script-random.html

WEB TRAVERSER SCRIPT

Sometimes you end up on systems where you don't have administrative access. In spite of that you need to do a network/security test/audit.

That's the point of this script. It's a very basic port of dot2moon.py

https://github.com/PsiqueLabs/dot2moon/

to test for directory traversal attacks against web servers and relies on utilities found on most UNIX/Linux systems.

It basically tests various URLs against a base URL and looks for HTTP return codes.
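In essence the check boils down to something like this (the base URL and payload file are placeholders):

while read -r p; do echo "$(curl -s -o /dev/null -w '%{http_code}' "http://example.com/${p}") http://example.com/${p}"; done < traversal_payloads.txt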

Obviously, I could have included automated crawling and bot capabilities, but given my previous experiences with having to maintain upkeep against counterbot technology I have decided to leave it out.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

create_usernames-1.01.zip

https://dtbnguyen.blogspot.com/2018/08/create-usernames-security-script-random.html

CREATE USERNAMES SCRIPT

If you're aware of tools like 'thc-hydra' then you'll be aware that you need a list of usernames to be able to run brute force attacks against servers. The problem is that you often don't have a list of usernames to start with. This is where this script comes in.

http://sectools.org/tool/hydra/

https://github.com/vanhauser-thc/thc-hydra

http://sectools.org/tool/medusa/

https://nmap.org/ncrack/

The input file requires names to allow the script to generate likely and probable usernames on the server or network in question.

Extract names via websites, social media networks, phone lists, direct contact, social engineering, OSINT, etc...

Example files have been included:

./create_usernames.sh names.txt > usernames.txt
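A cut-down sketch of the kind of expansion performed (the exact patterns in create_usernames.sh may differ; the input is assumed to be "first last" pairs, one per line):

while read -r first last; do echo "${first}.${last}"; echo "${first:0:1}${last}"; echo "${first}${last:0:1}"; echo "${last}.${first}"; done < names.txt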

Obviously, I could have included automated crawling and bot capabilities, but given my previous experiences with having to maintain upkeep against counterbot technology I have decided to leave it out.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

create_web_traffic-1.04.zip

https://dtbnguyen.blogspot.com/2018/08/web-traffic-creator-script-random-stuff.html

CREATE WEB TRAFFIC SCRIPT

From time to time you just want to test some basic stuff like network/website performance, bot and counter-bot systems, logging, configuration, etc...

This script allows you to do most of the basics such as change of user agent, change of utility, change of time, etc...

Modify variables as you see fit (it's reasonably well commented).

Obviously, isolating/benchmarking is more difficult the more variables are involved and certain types of devices/products may/will interfere. Please factor this in.

Use the tail command against the relevant log file for live checking, micro-SIEM capability, etc...

tail -f /var/log/apache2/access.log

tail -f /var/log/apache2/error.log

It can also be used as a traffic generator, obviously. The ability to detect such use is dependent on endpoint logging capabilities.
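The basic building block is just a request with a swapped user agent, something like (the URL and agent string are placeholders):

curl -s -o /dev/null -A "Mozilla/5.0 (X11; Linux x86_64)" http://example.com/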

As this is the very first version of the program (and I didn't have access to the original server while I was cleaning this up) it may be VERY buggy. Please test prior to deployment in a production environment.

get_sre_book-1.00.zip

https://dtbnguyen.blogspot.com/2018/08/google-site-reliability-engineering-sre.html

SRE BOOK DOWNLOADER SCRIPT

Recently, I came across a deal for a free SRE book (it's licensed fairly liberally so this is allowed).

Free eBook "The Site Reliability Workbook" @ Google

https://www.ozbargain.com.au/node/394231

https://landing.google.com/sre/book.html

https://landing.google.com/sre/book/index.html

https://creativecommons.org/licenses/by-nc-nd/4.0/

Since I prefer to have offline versions of books I come across, I decided to modify my zorro-manual-compiler.sh script to do just this. This is the result.

Obviously, you can modify this and use something like Selenium to extract text only (see my seek_bot_selenium project) but you may have to deal with formatting issues.

For those who are wondering this will work in principle with document upload websites such as:

https://www.scribd.com/

https://www.docdroid.net/

https://www.pdfpro.co/pdf-viewer

You just need to add some code to turn the page from time to time (such as xvkbd) and sleep between page turns so that everything renders correctly and completely.
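A rough sketch of that page-turn loop, assuming xvkbd's \[keysym] syntax and that Page Down (keysym 'Next') advances the viewer (the page count and delay are placeholders):

for i in {1..100}; do xvkbd -xsendevent -text '\[Next]'; sleep 5; done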

Please see my blog and website for details on copying multimedia (such as music and video) online.

As this is the very first version of the program (and I didn't have access to the original server while I was cleaning this up) it may be VERY buggy. Please test prior to deployment in a production environment.

audiobook_maker-1.00.zip

https://dtbnguyen.blogspot.com/2018/06/automated-audiobook-maker-script-random.html

AUDIOBOOK MAKER

I wanted a way to make audiobooks automatically. This is the result.

To use it, just drop the relevant TXT, DOC, or PDF files into the current folder, comment/uncomment the relevant function calls, and run the script. Wait while the script runs and you'll have your audiobooks at the end.

Obviously, this is very useful for a multitude of reasons, including the lack of current audiobook options, vision impaired people, other training options, etc...
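The underlying pipeline can be as simple as the following (espeak is used here as a stand-in for whatever text-to-speech engine the script actually calls; file names are placeholders):

pdftotext book.pdf - | espeak --stdin -w book.wav

ffmpeg -i book.wav -c:a aac book.m4a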

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

opera_cache_extractor-1.01.zip

https://dtbnguyen.blogspot.com/2018/06/opera-browser-cache-extractor-random.html

OPERA BROWSER CACHE EXTRACTOR SCRIPT

For some sites extracting content (streamed music and video in particular) can be difficult unless you're willing to go down the protocol level analysis route. Even with the advent of mass and automated downloaders, browser extensions and addons, etc... you need to do a packet/network trace to get what you want. Examine youtube-dl and you'll have a better understanding of what this entails.

I've obviously looked around at most utilities, but when I'm in a hurry and these options aren't working I've always found a useful 'fallback position': extracting files from the relevant browser cache directories. It was cumbersome so I wrote this script to automate things. The periodic wait and copy structure is there for those websites which choose to encode/encrypt the cached files in question upon complete download.
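The 'periodic wait and copy' structure amounts to a loop like this (the cache path and interval are placeholders; Opera's actual cache location varies by version):

mkdir -p ~/extracted_cache; while true; do cp -u ~/.opera/cache/* ~/extracted_cache/ 2>/dev/null; sleep 30; done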

It's obviously been tested with Opera but should work with any browser which functions similarly. The relevant packages are the following:

dpkg -i opera_11.60.1185_i386.deb

dpkg -i install_flash_player_10_linux.deb

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

software_mirror-1.02.zip

https://dtbnguyen.blogspot.com/2018/05/foss-software-mirror-script-random.html

FOSS SOFTWARE MIRROR SCRIPT

If you've ever administered a network of any sort you'll have needed to download/update your software on a regular basis. Over time, you get tired of manually doing this so I created a script to automate it.

It obviously mostly deals with standard FOSS software that you would normally expect to be deployed in a Windows network. If you understand the code you can easily expand it to arbitrary application sources. Some sources are easier to automate than others. The applications in question represent a reasonable subset though, and give you an idea of how to expand to any application you might imagine.

Deploy network wide using GPO, WAPT or any other network orchestration that may be applicable in your network.

https://www.wapt.fr/en/

https://support.microsoft.com/en-au/help/816102/how-to-use-group-policy-to-remotely-install-software-in-windows-server

https://www.advancedinstaller.com/user-guide/tutorial-gpo.html

I'm aware of similar things such as Ninite out there, but I wanted something that is less opaque.

https://ninite.com/

https://alternativeto.net/software/ninite/

At some point down the line I may build a WSUS compatible service (for those without the budget to deploy Windows Servers in their network), but it's likely to be non-free. Watch this space...

https://en.wikipedia.org/wiki/Windows_Server_Update_Services

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

my_launchpad_lightshow_mk2-1.00.zip

https://dtbnguyen.blogspot.com/2018/05/novation-launchpad-lightshow-random.html

MY LAUNCHPAD LIGHTSHOW MK2 SCRIPT

Wanted to see whether I could program a lightshow (independent of a DAW) for my Novation Launchpad device. Found the following library/code:

https://github.com/FMMT666/launchpad.py

https://github.com/FMMT666/launchpad.py/network

https://github.com/siddv/launchpad.py/network/members

There's obviously a lot that you can do with this device especially with the full colour models that move beyond the original limited colour modes. The code in this script allows a Novation Launchpad Mk2 to spell out letters, create random light colour sequences, create worm like movement of lights, etc...

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

book_downloader-1.04.zip

https://dtbnguyen.blogspot.com/2018/05/book-downloader-script-random-stuff-and.html

BOOK DOWNLOADER SCRIPT

From time to time you just want to download a bunch of documentation on a particular topic. This script automates that process by scraping relevant links via Google and then downloading it.

To use it just alter the "keyword" variable in the script and then run it. Else, modify the relevant part to take the first argument as the keyword so it can be used from the command line. I like to keep it in the script to keep track of what I've already looked up and downloaded.

Use it in combination with my whitepaper_examine.sh script to compare and summarise documents quickly. This script obviously uses large amounts of code from my compare_social_media.sh and email_harvester.sh scripts. Very useful for training and/or study and much easier than looking up educational websites. You may be surprised at how much good free stuff is out there.

I obviously considered using links obtained from alternate search engines but I found out from email_harvester.sh that the level of indexing by Google seems to be much better than the others. Hence, I've left that code out. If you want those results it's obviously not too difficult to port. Ironically, you'll discover that while search engines are good in some areas they're very weak in others. Your best bet is to use specialised search engines if you want to hone in on a particular area. Testing hasn't been easy because of the way replication, caching, networks, etc... work. You can't get consistent data even if the same requests have been made within seconds of one another.
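
A heavily simplified sketch of the scrape-and-download idea (assuming lynx and wget are installed; Google may throttle or rewrite results for automated queries, so treat this as illustrative only):

keyword="linux+administration"    # hypothetical search term
lynx -listonly -dump "https://www.google.com/search?q=${keyword}+filetype:pdf" \
  | grep -oE 'https?://[^ ]+\.pdf' | sort -u > pdf_links.txt
wget -c -i pdf_links.txt -P downloads/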

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

apt_change_check-1.04.zip

https://dtbnguyen.blogspot.com/2018/04/apt-change-request-check-script-random.html

APT CHANGE REQUEST CHECK SCRIPT

Ever go through a change request and it doesn't quite end up the way you planned it? This sort of helps with that.

It basically tracks all related dependencies of a relevant DEB package. Then it walks the dependency tree to see what underlying programs are likely to be impacted in the event of change. It also incorporates an Internet connectivity check since so many programs are dependent on this function nowadays.
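
On APT based systems the underlying dependency queries can be done with apt-cache; a minimal sketch of the sort of calls involved (yelp used as the example package, as in the sample output below):

pkg="yelp"
apt-cache depends "$pkg"                  # what this package needs
apt-cache --installed rdepends "$pkg"     # installed packages that need it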

Run it in combination with my network_mapping script/program to smooth out change requests in general as it can track which programs, services, infrastructure, and network devices are likely to be impacted.

http://dtbnguyen.blogspot.com/2018/04/network-mapping-tool-random-stuff-and.html

It's definitely not perfect and works only on APT/Debian based systems at this stage (I may port it to other systems later on. The task clearly isn't that difficult).

Sample output is in output.txt and was created through the following command:

./change_check.sh yelp sim > output.txt

It obviously also has an offline download package capability for those systems that need to remain offline.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

network_mapper-1.03.zip

https://dtbnguyen.blogspot.com/2018/04/network-mapping-tool-random-stuff-and.html

NETWORK MAPPER SCRIPT

Ever entered a new network and you didn't quite know what was available, wanted to do an inventory check/audit, or wanted to run a set of simple tests to see what sort of security profile your network currently has? That's the purpose of this script.

I had intended for it to be more customised but found that nmap contained much of the functionality that I wanted so I basically stuck with that. It can be slowish but it's still much faster than a lot of the network auditing, mapping, and vulnerability checkers out there.
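
Underneath, this mostly boils down to a handful of nmap invocations; a rough sketch of the kind of calls wrapped by the script (targets and options will differ in practice):

target="192.168.0.0/24"
nmap -sn "$target"              # quick host discovery (ping sweep)
nmap -sV -O "$target"           # service/version and OS detection (needs root for -O)
nmap --script vuln "$target"    # run the NSE vulnerability scripts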

Obviously, it can be used for attack as well as defense purposes. Read through for a better understanding. Ideally, you'll be running this against your own network. Turn off security detection systems prior to running this so as not to set off spurious alerts.

Integration of this into your existing network will give you micro-SIEM capability (use in combination with something like fail2ban and/or RRD type databases. This will give you better data on where attacks are coming from).

https://en.wikipedia.org/wiki/RRDtool

https://en.wikipedia.org/wiki/Fail2ban

https://en.wikipedia.org/wiki/Security_information_and_event_management

Usage is as follows. You can just fill in the network variable in this script with what you want and then run it, you can fill in target hosts in 'scan_hosts.txt' and then run it, or you can run it in combination with hosts from the command line itself.

Usage Examples:

./network_mapper.sh

./network_mapper.sh 127.0.0.1/32

./network_mapper.sh 192.168.0.1/32

./network_mapper.sh 192.168.0.0/24

./network_mapper.sh www.microsoft.com

./network_mapper.sh www.car.com www.dog.com

Go through the following if you want to add more functionality...

detect common vulnerabilities nmap

https://blah.cloud/security/scanning-for-network-vulnerabilities-using-nmap/

https://nmap.org/nsedoc/index.html

https://nmap.org/nsedoc/categories/discovery.html

https://askubuntu.com/questions/723172/what-tool-can-i-use-to-scan-a-network-and-test-wake-on-lan

Note, if you aren't great with CIDR format there are plenty of tools out there to help you with this:

network range to cidr format converter

https://ip2cidr.com/

https://www.ipaddressguide.com/cidr

https://mxtoolbox.com/subnetcalculator.aspx

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

compare_social_media-1.04.zip

https://dtbnguyen.blogspot.com/2018/03/compare-social-media-script-random.html

COMPARE SOCIAL MEDIA SCRIPT

Given the high level use of social media bots now, it's hard to gauge what the difference is between companies/organisations out there in the social media arena and why certain individuals and organisations may be significantly more popular than others.

The purpose of this script is basically to do a side by side comparison of social media presence by rendering a PDF version of each website (based on the category being searched), then creating a JPG version of the websites in question so that you can easily compare/contrast things.

I've used wkhtmltopdf because it's much quicker, smaller, more lightweight, etc... than other options that I've looked at in the past.
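
The rendering step itself is a couple of commands per site; a minimal sketch (hypothetical URL and filenames), using ImageMagick's convert to rasterise the PDF into a JPG for side by side viewing:

url="https://example.com/"                # hypothetical site to snapshot
wkhtmltopdf "$url" site.pdf               # render the page to PDF
convert -density 100 site.pdf site.jpg    # rasterise the PDF to JPG (one image per page)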

Sample data was run using 'mobile+payments' as the keywords. I've removed jpg_folder and pdf_folder to reduce the archive download size. thumbnail_folder obviously contains pictures of the social media sites in question.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

whitepaper_examine-1.03.zip

https://dtbnguyen.blogspot.com/2018/03/whitepaper-examine-script-random-stuff.html

WHITEPAPER EXAMINE SCRIPT

Ever get sick of trying to examine whitepapers? I have. That's the purpose of this particular script/program. Drop in a bunch of whitepapers (in PDF format), then run the script and wait while they are analysed.

I've obviously done some sample analysis. Got all sample data from:

https://icoranker.com/ico/category/banking/

Script was run as follows:

./whitepaper_examine.sh cleanup

./whitepaper_examine.sh > results

Results are obviously in 'results' file

error files contain errors during conversion from PDF files

spell files contain misspelt words

txt files contain converted PDF files in TXT format

word files contain word frequencies

summary files contain summary of documents being examined
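
Per document, the pipeline behind those output files is roughly the following (a sketch only, assuming pdftotext and aspell are installed; the real script does more):

f="whitepaper.pdf"                                    # hypothetical input file
pdftotext "$f" "${f%.pdf}.txt" 2> "${f%.pdf}.error"   # txt file plus conversion errors
aspell list < "${f%.pdf}.txt" | sort -u > "${f%.pdf}.spell"    # misspelt words
tr -cs 'A-Za-z' '\n' < "${f%.pdf}.txt" | tr 'A-Z' 'a-z' \
  | sort | uniq -c | sort -rn > "${f%.pdf}.word"      # word frequencies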

I've deleted all the scanned PDF content as well as the converted JPG files to ensure a small archive download. Word of warning: if you have a large number of files to deal with, execution of this script/program may take a long time. There are feedback mechanisms included though.

This script clearly isn't perfect and is basically meant to confirm problems and/or bias.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

make_pictures-1.01.zip

https://dtbnguyen.blogspot.com/2018/03/random-picture-makingblending-script.html

MAKE PICTURES SCRIPT

Sometimes, organisations don't have the resources to hire someone full time to deal with graphic design issues. Hence, you need to make the best of what you already have.

That's the purpose of this script/program. Basically, it runs your input picture through ImageMagick's convert utility with multiple parameters and also attempts to blend your input picture with those placed in the 'blending' folder. At the end, just pick a picture that you may like for your current marketing campaign effort.
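
As an illustration of the kind of convert/composite calls involved (input.jpg and blending/texture.jpg are hypothetical filenames; the real script generates many more variants):

in="input.jpg"
convert "$in" -modulate 120,150,100 modulated.jpg             # brightness/saturation tweak
convert "$in" -sepia-tone 80% sepia.jpg                       # sepia variant
composite -blend 60 "$in" blending/texture.jpg blended.jpg    # blend with a sample picture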

This is clearly much quicker than using Photoshop or GIMP. I prefer to use it in tandem with them though. In a few minutes you can have something 'fresh' for anything you might want to post out.

See the following website for examples of similar scripts (though the terms of these particular scripts are somewhat onerous):

http://www.fmwconcepts.com/imagemagick/index.php

I've done my best to find genuinely free (of royalty, cost, etc...) 'sample pictures' for you to blend pictures with but they've become increasingly difficult to find. Sample output was obtained by running the following command:

./make_pictures.sh 48555_XXX_v2.jpg

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

email_harvester-1.05.zip

https://dtbnguyen.blogspot.com/2018/02/email-address-harvesting-script-random.html

EMAIL ADDRESS HARVESTING SCRIPT

Recently, I noticed people trying to hunt down emails for companies and organisations that they found via Google. It was slow, manual/laborious work, etc... so I decided to look around for an email harvesting program. I found that they didn't really work very well (missed results) or were very slow (that type effectively works by crawling the entire site and then finding any relevant strings that look like email addresses). The irony is that both paid and unpaid programs were similar.

Hence, I decided to build my own. Very basic and as much a proof of concept as something that is of utility.

I obviously used core extractor code/concepts from the following program/s:

https://github.com/maldevel/EmailHarvester

https://github.com/laramies/theHarvester

https://github.com/midwire/email_spider

https://github.com/SimplySecurity/SimplyEmail

https://github.com/bbb31/parvist

https://github.com/opsdisk/theHarvester

https://github.com/moein7tl/email-extractor

It works slightly differently than some of the scripts/programs mentioned above. Basically, it scans results from search engines based on user supplied keywords. Thereafter, it scans certain pages from these domains to extract email addresses. Initial analysis indicates it works faster and often achieves better coverage than some of the above options.
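
At its core, the extraction step is just a regular expression run over fetched pages; a minimal sketch (hypothetical URL), not the script's full search-then-crawl logic:

url="https://example.com/contact"
wget -qO- "$url" \
  | grep -Eo '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}' \
  | sort -u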

Obviously, I did look at trying to overcome JavaScript based SPAM protection systems such as Cloudflare but that would have required a lot more work and would have likely slowed down program function (I likely would have done it via headless browser options or web test suites such as Selenium). Hence, its omission for the time being.

It can obviously be easily modified for phone number extraction as well and can be used for a multitude of reasons including marketing, penetration testing, plain research, counter-SPAM work, etc...

Function 'main' is the core of this script. If you want to alter functionality this is basically where the primary alterations will be made. The main functions you'll be interested in are google and parse_file. Examples of use are as follows:

google "keyword"

google "keyword1+keyword2"

parse_file url_file.txt

Output obviously contains debug information as well as email addresses. Just grep for the @ symbol to get only the email addresses.
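
For example, against the sample output file mentioned just below:

grep '@' venture_capital.txt | sort -u    # strip the debug lines, keep the addresses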

Sample output of following is contained in venture_capital.txt for your perusal:

google "venture+capital"

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

comment_bot-1.01.zip

https://dtbnguyen.blogspot.com/2018/01/social-media-comment-bot-random-stuff.html

COMMENT BOT

This is basically the skeleton code for a social media bot. Integrate the code with components mentioned in relevant blog article for an increased, automated presence via social media platforms.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

download_docs-1.01.zip

DOWNLOAD DOCS

My WikiReader recently developed a strange error whereby the screen just goes dark upon loading (it doesn't feel like a hardware fault because the boot screen showing normal characters shows up first). I have been trying to figure out a solution or else an alternative in the meantime. Difficult though, as documentation out there is minimal and the company has basically shut down. So I created a headless crawler to gather what I need online and provide similar functionality offline.

It uses wkhtmltopdf which is a lot lighter than a lot of the JavaScript (and other) based PDF distiller options out there. A sample file is obviously included. It was created as follows:

./download_docs.sh https://en.wikipedia.org/wiki/Lockheed_Martin_F-22_Raptor

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

get_asx_data-1.02.zip

https://dtbnguyen.blogspot.com/2019/12/asx-share-analyser-script-random-stuff.html

GET ASX DATA

I wanted to know basic long term data about price movement from the ASX based on information that I found online at the following site:

https://www.asxhistoricaldata.com/

https://www.asxhistoricaldata.com/archive/

The basic results were calculated by extracting files from the following archives and then running this script across them.

1997-2006.zip

2007-2012.zip

2013-2016.zip

Basic data says that a small proportion of ASX stocks go up over a given period, many go down over a given period, while most of the rest go off market. Data is obviously included in this archive. It clearly indicates that picking shares isn't as easy as it sounds on the ASX (even for professionals who deal with IPOs in the first place).

$ tail -n 4 results-*

==> results-19970102-20061229.dat <==

[Total Stocks] - 731

[Total Up] - 135

[Total Down] - 84

[Total Off] - 512

==> results-20070102-20121231.dat <==

[Total Stocks] - 1452

[Total Up] - 168

[Total Down] - 401

[Total Off] - 883

==> results-20130102-20161230.dat <==

[Total Stocks] - 1284

[Total Up] - 373

[Total Down] - 320

[Total Off] - 591

There is some dodgy data in the 2013 to 2016 period but I haven't really bothered to look further as this was mainly about checking a theory that I was interested in and the overall results/data output seem fine.
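
For the curious, the per-ticker comparison is conceptually something like the awk sketch below; it assumes the daily files have been concatenated into all_days.csv (a hypothetical name) with a ticker,date,open,high,low,close,volume layout (an assumption on my part), and it only covers the up/down counts, not the off-market logic:

sort -t, -k1,1 -k2,2 all_days.csv | awk -F, '
  !($1 in first) { first[$1] = $6 + 0 }   # first close seen per ticker
  { last[$1] = $6 + 0 }                   # keep updating the last close
  END {
    for (t in first) { if (last[t] > first[t]) up++; else down++ }
    print "[Total Up] - " up
    print "[Total Down] - " down
  }'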

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

create_blogger_chapters-1.01.zip

https://dtbnguyen.blogspot.com/2018/01/blogger-to-markdown-for-books-random.html

CREATE BLOGGER CHAPTERS

Wanted a script/s which could collate book chapters (or even a complete book) from my blog. This is the result. Basically, use the exported XML backup file from blogspot.com to create book chapters from it.

create_blogger_chapters.sh is obviously the core driver file but the others can obviously be used on their own.

Example of usage is the following:

./create_blogger_chapters.sh blog-03-12-2017.xml

Modify collate_book.sh to change subject/s being looked at.

Modify compile_book.sh to change document output format/s.

Modify blogger2book.sh to change the way core XML file is divided.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

make_cover_letter-1.03.zip

https://dtbnguyen.blogspot.com/2017/12/custom-linux-bash-seek-cover-letter-and.html

LINUX BASH MAKE COVER LETTER

After building my resume maker, job application bot, and cover letter maker I wanted to see whether I could build a custom resume maker. This is the result of that particular little experiment.

It works very simply. Fill in the correct variables in this script and it will build you resumes of various types, based on job class/type, and provide you with short, mid, or full sized resumes and various output formats (please skim through the entire script; everything is there and it's pretty simple to modify to ensure you aren't wasting your time trying to figure something out that is obvious).

It relies on pandoc as I've discovered it's one of the more flexible and comprehensive markup format handlers out there.

It does this to save on time and to help deal with issues during updates.

When combined with an automated job application system (such as the one I built earlier for Seek job application website) it also helps to increase the chances of being able to get past any existing filtering systems as well.

http://dtbnguyen.blogspot.com/2017/05/linux-seek-job-application-bot-random.html

I have to admit that while the HTML output is pretty ugly the PDF and DOC output options are really good given how simple the code is. Example files were created via the following command:

./make_custom_resume.sh "https://www.seek.com.au/job/35103952?type=standard&userqueryid=88446476173ca3ccb21f09e843373336-6517556" all

Please note this script is only designed to work with Seek but can easily be modified to work with other websites.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

make_custom_resume-1.01.zip

https://dtbnguyen.blogspot.com/2017/12/custom-linux-bash-seek-cover-letter-and.html

LINUX BASH MAKE CUSTOM RESUME

After building my resume maker and job application bot I wanted to see whether I could build a cover letter bot. This is the result of that particular little experiment.

It works very simply. Fill in the correct variables in this script and it will build you resumes of various types, based on job class/type and provide you with short, mid, or full sized resumes and various output formats (Please skim through the entire script. Everything is there and it's pretty simple to modify to ensure you aren't wasting your time trying to figure something out that is obvious.).

It relies on pandoc as I've discovered it's one of the more flexible and comprehensive markup format handlers out there.

It does this to save on time and to help deal with issues during updates.

When combined with an automated job application system (such as the one I built earlier for Seek job application website) it also helps to increase the chances of being able to get past any existing filtering systems as well.

http://dtbnguyen.blogspot.com/2017/05/linux-seek-job-application-bot-random.html

I have to admit that while the HTML output is pretty ugly the PDF and DOC output options are really good given how simple the code is. Example files were created via the following command:

./make_cover_letter.sh "https://www.seek.com.au/job/35103952?type=standard&userqueryid=88446476173ca3ccb21f09e843373336-6517556" all

Please note this script is only designed to work with Seek but can easily be modified to work with other websites.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

resume-maker-1.03.zip

https://dtbnguyen.blogspot.com/2017/12/linux-bash-resume-maker-random-stuff.html

LINUX BASH RESUME MAKER

After seeing a few people construct automated resume builders online I wanted to build my own. It works very simply. Fill in the correct variables in this script and it will build you resumes of various types, based on job class/type, and provide you with short, mid, or full sized resumes and various output formats (please skim through the entire script; everything is there and it's pretty simple to modify to ensure you aren't wasting your time trying to figure something out that is obvious).

It relies on pandoc as I've discovered it's one of the more flexible and comprehensive markup format handlers out there. It does this to save on time and to help deal with issues during updates.
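
The pandoc step itself is straightforward; a minimal sketch of the conversions involved (resume.md is a hypothetical intermediate file, and the PDF route needs a LaTeX install):

pandoc resume.md -o resume.html    # quick (and admittedly ugly) HTML
pandoc resume.md -o resume.docx    # Word format
pandoc resume.md -o resume.pdf     # PDF via LaTeX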

When combined with an automated job application system (such as the one I built earlier for Seek job application website) it also helps to increase the chances of being able to get past any existing filtering systems as well.

http://dtbnguyen.blogspot.com/2017/05/linux-seek-job-application-bot-random.html

I have to admit that while the HTML output is pretty ugly the PDF and DOC output options are really good given how simple the code is. Example files were created via the following command:

./resume_maker.sh ref gen

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

quick_browser-1.02.zip

LINUX BASH QUICK BROWSER SCRIPT

Over time, you would have noticed the increased propensity of websites to become more heavily dependent on JavaScript (and associated libraries such as jQuery, AngularJS, etc...) and browsers to become slower. This has led to ridiculous load times on 'lesser systems'. Hence, I decided to build my own 'mini_browser'. It basically takes input from a file called 'bookmarks.txt' (add links to this file as you wish). It then builds a menu system from this and uses a local Linux CLI based browser such as links, lynx, elinks, etc... from which to render and 'dump' an existing webpage, or else you can browse it like normal if you use the 'browse' option on command initialisation.

If you prefer speed you'll obviously prefer the standard functionality (don't add the 'browse' option) at runtime. As a consequence of its construction it also happens to skip over advertising as well as some types of paywalls.
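
A stripped down sketch of the menu-and-dump idea (assuming bash and lynx; the real script supports several CLI browsers plus the 'browse' mode):

#!/bin/bash
# present bookmarks.txt as a numbered menu, then dump the chosen page as plain text
mapfile -t links < bookmarks.txt
select url in "${links[@]}"; do
    [ -n "$url" ] && lynx -dump "$url" | less
done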

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

downsize_pictures-1.01.zip

https://dtbnguyen.blogspot.com/2017/11/linux-picture-downsizing-script-random.html

LINUX PICTURE DOWNSIZING SCRIPT

Picture quality on modern cameras/smartphones is now so high that storage and transmission of pictures has become extremely inefficient. This script is designed to deal with that. Basically, drop it in the relevant folder with the JPEG/picture files and run it to create dramatically smaller pictures that are put in the 'downsized' directory. Savings of up to five times with little discernible drop in picture quality are not uncommon thanks to the 'convert' utility.
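
The core of it is a single convert call per image, something like the following (the size and quality values are illustrative only):

mkdir -p downsized
for f in *.jpg; do
    # shrink only if larger than 1600px on the long edge, recompress at quality 80
    convert "$f" -resize '1600x1600>' -quality 80 "downsized/$f"
done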

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

make_blog_post_news-1.01.tar.gz

LINUX BASH CLI BASED NEWS AGGREGATOR

If you're not aware there's actually quite a lot of shenanigans out there when it comes to news and PSYOPS.

https://www.itwire.com/enterprise-solutions/79542-google-offers-olive-branch-to-online-publishers.html

http://dtbnguyen.blogspot.com.au/2016/07/social-engineeringmanipulation-rigging.html

For that reason, I decided to build my own mini news aggregator. It basically works by scanning the homepage of relevant websites, finding links, and then displaying them. This then allows me to easily pick out news items that I am interested in by running them through a grep command with the relevant command line switches, and it can easily be used as the basis for a proper website based news aggregator.

https://sites.google.com/site/dtbnguyen/linux-dictionary-tools.zip

https://sites.google.com/site/dtbnguyen/
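
The scan itself can be as simple as dumping each homepage's links and grepping for topics of interest; a minimal sketch (hypothetical site and keyword, assuming lynx is installed):

site="https://www.example-news-site.com/"
lynx -listonly -dump "$site" | grep -i "politics" | less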

Obviously, it's relatively simplistic but it does exactly what I need without requiring huge amounts of computing resources (and using it with 'less' makes for a much lighter and quicker overview of the daily news than opening your browser, loading the webpage, etc...). Obviously, it can be easily modified for custom purposes as well.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

bible-code-1.06.tar.gz

https://dtbnguyen.blogspot.com/2017/07/bible-codes-random-stuff-and-more.html

BIBLE CODE FINDER

After coming across a book and some documentaries on Bible Codes I decided to build a program/script which would examine arbitrary books for codes based on Equidistant Letter Spacing codes. This is obviously the culmination of this work.

To use it run driver.sh (modify driver.sh if need be) to scan files of your choosing. Obviously, the core metrics you'll be looking at are:

[original_file_word_count (extracted equidistant words)]

[analysis_file_word_count (words found in dictionary)]

[normalised_word_metric (analysis_file_word_count/original_file_word_count)]

Obviously, the higher the normalised_word_metric the higher the likelihood there is a code (or meaningful structure) in the text that you are currently examining.
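
For reference, extracting an equidistant letter sequence from a text only takes a few commands; a minimal sketch (the real script loops over many skip values and checks the result against a dictionary):

skip=7                            # illustrative skip distance
tr -cd 'A-Za-z' < book.txt \
  | fold -w1 \
  | awk -v n="$skip" 'NR % n == 1' \
  | tr -d '\n'                    # letters at every n-th position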

...

The results of my experiment are included in the analysis_results folder and analysed_results.txt file. Clearly, most of the results indicate few if any coded words and those results that do indicate coded words aggregate around 0.05 to 0.2

[normalised_word_metric (analysis_file_word_count/original_file_word_count)]

Moreover, the incidence of codes arises just as often in novels and children's books as in many religious texts?

It obviously has other uses as well such as steganography, computer security, and forensics.

One caveat is that this script/program was built in a very short space of time which means that it might not be as efficient as it could possibly be had I spent more time on it. It can take anywhere from a few minutes to several hours to scan a few files depending on the files you intend to process and the setup you currently have in your environment.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

https://sites.google.com/site/dtbnguyen/nodep_deb-1.01.tar.gz

https://dtbnguyen.blogspot.com/2017/06/no-dependency-debian-packages-random.html

NO DEPENDENCY DEBIAN PACKAGES

If you need to work a lot with various software packages you eventually come across one situation over and over again. Namely, sometimes someone has mis-packaged a piece of software which makes it impossible to install even if you've technically fulfilled all the correct pre-requisites. This script deals with this problem by modifying the necessary 'control file data' in Debian packages so that the package no longer requires any dependencies, allowing it to be installed quickly and easily.
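
The mechanics are roughly the following (a sketch using dpkg-deb with hypothetical filenames; back up the original package first):

dpkg-deb -R broken.deb work/                 # unpack the package plus its control data
sed -i '/^Depends:/d' work/DEBIAN/control    # drop the Depends: line entirely
dpkg-deb -b work/ fixed_nodep.deb            # rebuild the package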

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

https://sites.google.com/site/dtbnguyen/mvs-1.06.tar.gz

https://dtbnguyen.blogspot.com/2017/05/micro-versioning-system-mvs-random.html

MICRO VERSIONING SYSTEM

Increasingly, RCS and other older versioning systems are no longer being installed on newer systems and some of the newer versioning systems are too frustrating to work with. This is particularly the case for some of the major online ones which are painful to set up, use, and sometimes offer not much more security than you can establish by yourself.

Hence, I built my own Micro Versioning System (MVS). It's obviously extremely simple and is designed for single person use but it does the job. Just modify the relevant variables (repo_dir and work_dir). Usage should be reasonably self explanatory. May build a more complete and higher performance system down the track for multiple users?

Personally:

- repo_dir is structured as (1.00, 1.01, 1.02, 1.03, etc...)

- work_dir can be anywhere that you want as long as it's relatively clear of extraneous files. You'll need to make additions to the exclusion lists in this script if not

- I copy mvs.sh to anywhere that I want and run it from there rather than installing it, but what you do is up to you

That said, you can structure things any way you want as long as you understand it yourself.
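
Conceptually, each check-in is little more than a versioned copy; a rough sketch of the idea (excludes.txt is hypothetical, and the real mvs.sh does considerably more bookkeeping):

repo_dir="$HOME/repo"
work_dir="$HOME/work"
last=$(ls "$repo_dir" | sort -V | tail -n 1)    # e.g. 1.03
next=$(awk -v v="${last:-0.99}" 'BEGIN { printf "%.2f", v + 0.01 }')
rsync -a --exclude-from=excludes.txt "$work_dir/" "$repo_dir/$next/"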

It's obviously designed to be easily extendable but many of the functions of a more professional versioning system are already included. Clearly though, it's designed for smaller projects that are based around text based files (I may change/extend this at another time).

Hit the 'q' button from time to time if the program/script seems unresponsive (it's just the way the less command works). I built this in a very short amount of time so things mightn't be as smooth as they could be.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

https://sites.google.com/site/dtbnguyen/news_homepage_bias-1.03.tar.gz

NEWS HOMEPAGE BIAS CHECKER

This script is an iteration of my news_page_bias.sh script. Just a few extra tweaks here and there to facilitate looking at news homepages instead of RSS feeds. This should provide for better metrics.

It will scan the folder for a file given as the first argument. This file is scanned for news homepages which are then checked for bias based on a pretty simple set of metrics. At some stage down the line I may re-write this or other pieces of software to do more thorough checking.

Obviously, it's pretty rudimentary and reads feeds included in the links_news_homepage.txt file. Add feeds as you want. Comment out newsfeeds that are irrelevant using the "#" symbol like in Python and BASH (some feeds aren't really possible to check because of their structure or you can't get a decent gauge of bias because the size of the feeds vary drastically).

It's not supposed to be taken too seriously (though I may write something more relevant later on?).

I've been very surprised/perplexed by some of the results (a good example of this is the following: a lot of websites that don't look biased seem to be, while others that do seem biased come across as more neutral). That said, it's doing the check on a very small sample that often differs from site to site, which makes adequate quantification of bias very difficult.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

https://sites.google.com/site/dtbnguyen/seek_bot-1.05.tar.gz

https://dtbnguyen.blogspot.com/2017/05/linux-seek-job-application-bot-random.html

SEEK JOB APPLICATION BOT

This script is to facilitate searching for jobs that are easy to apply for on the Seek website. Obviously, really basic but it does the job by iterating through all relevant job links and then parsing the content of job application pages to see whether they can be applied for easily or not. If they can, flag it, else ignore it. Change the relevant code inside seek_bot_selenium.sh and elsewhere and you can also automatically apply for jobs using the other included experimental files in this package.

More interesting for me is that this set of scripts can easily be modified to also customise job applications based on job descriptions. Fun project for someone?

Look at my Seek Menu project if you want further details of links that can/should be supplied to seek_bot_selenium.sh

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

get_spectrogram-1.04.tar.gz

https://dtbnguyen.blogspot.com/2017/05/song-sound-and-polygraph-spectrum.html

GET SPECTROGRAM

This script is to help analyse music and video. Namely, it converts them to WAV format so that we can later extract a spectrogram from them for further analysis. It's obviously fairly basic at this stage but it's good enough for the type of sound analysis that I intend to do. Further development based on need.
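
The conversion and spectrogram extraction can be done with common tools; a minimal sketch (hypothetical filenames, assuming ffmpeg and sox are installed, which may differ from what the script actually uses):

ffmpeg -i input.mp4 -vn audio.wav                 # strip the video, keep the audio as WAV
sox audio.wav -n spectrogram -o spectrogram.png   # render a spectrogram image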

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

music_composer-1.09.tar.gz

https://dtbnguyen.blogspot.com/2017/05/perl-algorithmic-music-midi-composer.html

PERL ALGORITHMIC MUSIC MIDI COMPOSER

This script is to facilitate automated composition of music. It's obviously fairly basic at this stage and requires tools from the following page to work:

http://www.fourmilab.ch/webtools/midicsv/

http://www.fourmilab.ch/webtools/midicsv/midicsv-1.1.tar.gz

http://www.fourmilab.ch/webtools/midicsv/midicsv-1.1.zip

Nonetheless, based on what I've seen this is a lot easier to understand, extend, and work with than a lot of other MIDI libraries out there which are often too complex or too basic. You should be able to get something very useful working from this even if you don't know much about music itself.
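
The midicsv tools make the round trip trivial, which is what makes algorithmic composition from a script practical; for example (hypothetical filenames):

midicsv input.mid notes.csv     # MIDI -> editable CSV (one event per line)
# ... generate or transform the note rows here ...
csvmidi notes.csv output.mid    # CSV -> playable MIDI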

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

youtube_downloader-1.06.zip

youtube_downloader-1.05.zip

youtube_downloader-1.04.tar.gz

https://dtbnguyen.blogspot.com/2018/01/net-neutrality-or-googleyoutube.html

https://dtbnguyen.blogspot.com/2017/04/youtube-news-downloader-script-music.html

YOUTUBE NEWS DOWNLOADER

This script is to aid downloading of online videos from an arbitrary YouTube channel video webpage. The main reason why I created this was because of dodgy local Internet connectivity and resources which caused all sorts of weirdness in my local browser.

As far as I know, this should be allowed/perfectly legal in most jurisdictions because it effectively works in the exact same fashion that most web browsers work 'behind the scenes'. For instance, look at about:cache (or the temporary download folder) in many browsers and look at the largest files in them. They're often pure video which can be viewed by VLC.

It works by using the following tool and grabbing the pure video (or audio) stream only (you can change it if you know how. Pretty simple) and downloading it into a relevant file/container:

https://rg3.github.io/youtube-dl/

https://github.com/rg3/youtube-dl

https://rg3.github.io/youtube-dl/supportedsites.html
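
The underlying calls are simply youtube-dl with a format selector; a couple of illustrative invocations ($video_url is a hypothetical variable, and the real script wraps these with channel-page parsing):

youtube-dl -f best -o '%(title)s.%(ext)s' "$video_url"        # best combined stream
youtube-dl -f bestaudio -o '%(title)s.%(ext)s' "$video_url"   # audio only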

Obviously, this script can be made to work for a whole host of websites out there and can be run at regular intervals to save you the trouble of having to manually go to a website. Do a bit more work to parse the JSON and there's a whole heap of interesting stuff you can do with it.

As this is the very first version of the program (and I didn't have access to the original server while I was cleaning this up) it may be VERY buggy. Please test prior to deployment in a production environment.

news_feed_bias-1.02.tar.gz

news_feed_bias-1.00.tar.gz

https://dtbnguyen.blogspot.com/2017/04/news-feed-bias-checker-random-stuff-and.html

https://dtbnguyen.blogspot.com/2017/04/news-bias-checker-2-random-stuff-and.html

NEWSFEED BIAS CHECK

Obviously, really basic (and built from my Linux Planet Blog Checker Script, http://dtbnguyen.blogspot.com/2017/04/linux-planet-blog-checker-script-github.html) but gives a good estimate... You won't understand some of the choices unless you understand a bit more about politics (think a little bit and you'll realise how biased our lives can often be?).

It will scan the folder for a file called newsfeeds.txt. This file is scanned for RSS news feeds which are then checked for bias based on a pretty simple set of metrics. At some stage down the line I may re-write this or other pieces of software to do more thorough checking.

Obviously, it's pretty rudimentary and reads feeds included in the newsfeeds.txt file. Add feeds as you want. Comment out newsfeeds that are irrelevant using the "#" symbol like in Python and BASH (some feeds aren't really possible to check because of their structure or you can't get a decent gauge of bias because the size of the feeds vary drastically).

It's not supposed to be taken too seriously (though I may write something more relevant later on?).

I've been very surprised/perplexed by some of the results (a good example of this is the following: a lot of websites that don't look biased seem to be, while others that do seem biased come across as more neutral). That said, it's doing the check on a very small sample that often differs from site to site, which makes adequate quantification of bias very difficult.

An included file called bias_check.txt contains the results of a scan completed as of the day this post was published... Below are the results.

As this is the very first version of the program (and I didn't have access to the original server while I was cleaning this up) it may be VERY buggy. Please test prior to deployment in a production environment.

github_downloader_info_pack-1.00.tar.gz

github_downloader_info_pack-1.01.zip

GITHUB INFO

This script is to facilitate information collection for potential downloading of Github repositories for an arbitrary user or group.

As this is the very first version of the program (and I didn't have access to the original server while I was cleaning this up) it may be VERY buggy. Please test prior to deployment in a production environment.

planet-check-1.02.tar.gz

https://dtbnguyen.blogspot.com/2017/04/linux-planet-blog-checker-script-github.html

LINUX PLANET BLOGGER CHECK

This script is to facilitate information collection for Linux/FOSS related blog feeds syndicated to https://planet.linux.org.au/. Note that it's very basic and doesn't really weed things out based on metrics that could be fairer, but it does give you a good idea of what's going on.

As this is the very first version of the program (and I didn't have access to the original server while I was cleaning this up) it may be VERY buggy. Please test prior to deployment in a production environment.

blogger2book-1.06.tar.gz

https://dtbnguyen.blogspot.com/2017/03/life-in-yemen-blogger2book-bash-script.html

BLOGGER2BOOK

This script is to facilitate conversion of blogger XML backup files to a book/PDF format, after it was found that there were no real lightweight mechanisms of achieving this.

It works by conducting an XSL transform on the original XML file, splitting it into individual HTML files, making use of wkhtmltopdf to convert the HTML files to PDF format, and then finally using pdfunite to build a single compiled document that is easier to read on portable document devices. It also creates a HTML menu navigation file as well.
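
As a sketch of the final two stages (split.xsl and the per-post HTML filenames are hypothetical; the real script drives the whole pipeline and also builds the navigation page):

xsltproc split.xsl blog-backup.xml    # XSL transform of the Blogger XML backup
for f in post_*.html; do
    wkhtmltopdf "$f" "${f%.html}.pdf" # one PDF per post
done
pdfunite post_*.pdf book.pdf          # stitch them into a single book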

As this is the very first version of the program (and I didn't have access to the original server while I was cleaning this up) it may be VERY buggy. Please test prior to deployment in a production environment.

seek_menu-1.03.tar.gz

https://dtbnguyen.blogspot.com/2017/03/prophetsgenesisterraforming-mars-seek.html

CREATE SEEK MENU

Recently Seek changed their website such that you could no longer browse via standard hyperlinks and had to resort to a clunky menu based navigation mechanism. This script is to get around that particular problem. Just run it to create a new Seek menu, just like the old one, that is relevant for ICT workers. Else, just use the relevant files which were created by me in this package.

As this is the very first version of the program (and I didn't have access to the original server while I was cleaning this up) it may be VERY buggy. Please test prior to deployment in a production environment.

github_downloader-1.08.tar.gz

github_downloader_info_pack-1.01.zip

https://dtbnguyen.blogspot.com/2017/03/prophetspre-cogsstargate-program-8.html

GITHUB DOWNLOADER

This script is to facilitate downloading of Github repositories for an arbitrary user or group.
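
The GitHub API makes this a short loop; a minimal sketch (someuser is hypothetical, and the unauthenticated API is rate limited):

user="someuser"
curl -s "https://api.github.com/users/$user/repos?per_page=100" \
  | grep -o '"clone_url": *"[^"]*"' | cut -d'"' -f4 \
  | while read -r repo; do git clone "$repo"; done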

As this is the very first version of the program (and I didn't have access to the original server while I was cleaning this up) it may be VERY buggy. Please test prior to deployment in a production environment.

rssread-1.11.tar.gz

http://dtbnguyen.blogspot.com/2017/01/linux-bash-cli-rss-reader-explaining_28.html

LINUX BASH CLI RSS READER

This script is to facilitate reading of RSS feeds from the Linux CLI. Obviously, it's pretty basic and reads the feeds included in the newsfeed file. Add feeds as you want. Extractor code for OPML/XML files is included further down here.

As this is the very first version of the program (and I didn't have access to the original server while I was cleaning this up) it may be VERY buggy. Please test prior to deployment in a production environment.

soundcloud-1.09.sh.zip

http://dtbnguyen.blogspot.com/2015/10/geo-politics-soundcloud.html

SOUNDCLOUD MUSIC DOWNLOADER

This script is to facilitate automated retrieval of music from the website, http://www.soundcloud.com/ after it was found that existing website download programs such as Teleport Pro, HTTrack, and FlashGet were too inefficient.

It works by reverse engineering the storage scheme of files on the website and taking advantage of the lack of need for registration and login credentials, so that we end up with a more efficient automated download tool.

Obviously, the script can be modified on an ad-hoc basis to be able to download from virtually any website. As this is the very first version of the program (and I didn't have access to the original server while I was cleaning this up) it may be VERY buggy. Please test prior to deployment in a production environment.

'Uncompyle2' Updates

http://dtbnguyen.blogspot.com/2015/07/python-decompilation-max4live.html

UNCOMPYLE2 UPDATES

This is an updated version of 'uncompyle2', https://github.com/Mysterie/uncompyle2 which contains a flaw which doesn't allow for building of RPM packages. This updated version comes with pre-built RPM and DEB packages. Running 'alien' allowed conversion of the RPM to a DEB package for easy installation on a Debian based platform. See the following blog post for more details.

ID3 Music Organiser Script

ID3 MUSIC ORGANISER SCRIPT

This series of scripts is used to organise a group of ripped MP3 files that have ID3 tags but not the correct file and folder names. It does so by calling a series of commands to rename files based on ID3 tag information and then attempts to move or rename files and folders based on artist or album. Using just 'master.sh' you can organise by album/artist but by using 'organise.sh' you can organise based on the more conventional artist/album system as used by Microsoft and Apple.

Either way, Windows Media Player should eventually figure out how to reorganise your files based on the information, files, and folders that are supplied by these scripts (WMP only seems to use the ID3 tag information based on my recent experience, which is why I built these scripts).

Obviously, you can run these scripts on an ad-hoc basis and/or you can also run it continuously with a scheduling program such as 'cron'. You also need the following utilities to be installed, id3, eyed3, and uuid-runtime.

As this is the very first version of the program (and I didn't have access to all test data while I was cleaning this up) it may be VERY buggy. Please test prior to deployment in a production environment.

create_empty_structure-1.00.zip

CREATE EMPTY STRUCTURE

I guess the following script is an iteration of another script that I wrote.

http://dtbnguyen.blogspot.com.au/2015/01/scripting-electronic-and-musical.html

https://sites.google.com/site/dtbnguyen/mkempty-1.01.zip

It's a script that I created to save space. It creates a local copy of the filesystem hierarchy at a remote location, but with zero sized files. https://sites.google.com/site/dtbnguyen/create_empty_structure-1.00.zip
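
The idea itself fits in a few lines; a rough sketch (the paths are hypothetical):

src="/mnt/remote"     # the remote/source hierarchy
dst="./empty_copy"    # local zero sized mirror
(cd "$src" && find . -type d) | while read -r d; do mkdir -p "$dst/$d"; done
(cd "$src" && find . -type f) | while read -r f; do touch "$dst/$f"; done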

http://dtbnguyen.blogspot.com.au/2015/02/more-scripting-copy-reaktor-and-musical.html

mkempty-1.01.zip

MKEMPTY

A script that I created to save space. It works by using dd to reduce all relevant files to zero size. It contains comments to make it customisable and creates log files just in case something goes wrong.

http://dtbnguyen.blogspot.com.au/2015/01/scripting-electronic-and-musical.html

download_date_sections.sh.zip

DOWNLOAD DATE SECTIONS VST4FREE

I recently wanted to download all the applications/archives from a particular website, http://www.vst4free.com/ so I looked at various website download programs (HTTrack, Teleport Pro, wget, curl, etc...). In spite of the filters/wildcards that were available they were too slow to be realistic.

What did I do? I built an automated crawler/downloader/scraper because I noticed patterns in the way files were encoded.

http://dtbnguyen.blogspot.com.au/2015/01/building-reaktor-synthesisers-download.html

hl1110lpr_3.0.1-2_i386.deb

hl1110cupswrapper_3.0.1-2_i386.deb

BROTHER HL-1110 DEBIAN DRIVERS

I've been meaning to purchase a new toner cartridge for my Brother HL-2140 laser printer for a short while now but noticed that the price of cartridges is a multiple of the price of the cheapest laser printer at 'Officeworks'.

http://www.openprinting.org/printer/Brother/Brother-HL-2140

The only problem is that you may need to update your drivers. I wasn't able to find any relevant Debian packages after a quick search online, so I converted these from the RPM packages that were available online. The existing driver for the Brother HL-1110 prints nothing but blanks at this stage on some versions of Linux.

http://dtbnguyen.blogspot.com.au/2015/01/printing-re-spinning-and-musical.html


REAKTOR SOFTWARE SYNTHESISERS

Some music software synthesisers that I've built to learn about Reaktor (and general) software synthesiser development.

Stereo-Mixer-Example-1.ens

http://dtbnguyen.blogspot.com.au/2015/02/more-scripting-copy-reaktor-and-musical.html

Multiple-Oscillator-Sawtooth-Triangle-Sine-Filter-Interface-Delay-4.ens

Multiple-Oscillator-Sawtooth-Triangle-Sine-Parabol-Impulse-Pulse-Filter-Interface-Delay-5.ens

Multiple-Oscillator-Sawtooth-Triangle-Sine-Parabol-Impulse-Pulse-Filter-Interface-Delay-Pan-6.ens

Multiple-Oscillator-Polyphonic-Selector-Filter-Interface-Delay-Pan-7.ens

http://dtbnguyen.blogspot.com.au/2015/01/building-reaktor-synthesisers-download.html

Single-Oscillator-Sawtooth-1.ens

Single-Oscillator-Sawtooth-Filter-2.ens

Multiple-Oscillator-Sawtooth-Triangle-Sine-Filter-Interface-3.ens

http://dtbnguyen.blogspot.com.au/2015/01/printing-re-spinning-and-musical.html

'Firehol' Updates

firehol_1-1_all.deb

firehol-1-0.noarch.rpm

'FIREHOL' UPDATES

Recently, I've been working on various software projects. One of them has involved integrating 'firehol' (a firewall management system) into a larger project of mine (more details later). Even though it is clear that the project was fairly mature it hasn't really been kept up to date of late. One of the main problems in my situation was the automated building of 'RPM' and 'DEB' packages. Digging through the various configuration files it was obvious that there were some things that needed changing.

...

As this is the very first version of the program (and I didn't have access to all test data while I was cleaning this up) it may be VERY buggy. Please test prior to deployment in a production environment.

...

NOTE - the maintainer of the project has been contacted but has thus far not responded to any communication. The file involved is 'firehol-1.273.tar.bz2' downloaded from, http://en.sourceforge.jp/projects/sfnet_firehol/releases/ with the following MD5 checksum, 'cbbe1ba21cf44955827d5c906a55aa21'. For those who are lazy, I've uploaded updated files to:

https://sites.google.com/site/dtbnguyen/firehol-1-0.noarch.rpm,

57455222f6e5d8840bbf019751ade88b

https://sites.google.com/site/dtbnguyen/firehol_1-1_all.deb,

cd083ffa6285ccfc6661f41d78a74da9

https://sites.google.com/site/dtbnguyen/

mail-mydocs-1.12.tar.gz

MAIL QUOTA/MY DOCUMENTS SCRIPT

This script is to determine the current mail usage, mail quota, and 'My Documents' levels for all users in an organisation by parsing the contents of the Maildir/maildirsize file and doing comparisons between various directory sizes. In this case, 'My Documents' is shared with 'MailDir' and the mail quota size is calculated by querying against an LDAP server. Obviously many files are used for processing and formatting in order to achieve the final graphs and various other statistics. Ultimately, a summary of all these details is sent to sysadmin@company.com

Obviously, you can run this script on an ad-hoc basis and/or you can also run it continuously with a scheduling program such as 'cron'. You also need the following utilities to be installed, gnuplot, ImageMagick, and ldapsearch.

http://adminschoice.com/crontab-quick-reference

http://www.math-linux.com/spip.php?article45

As this is the very first version of the program (and I didn't have access to the original server while I was cleaning this up) it may be VERY buggy. Please test prior to deployment in a production environment.

mail-over-1.02.sh.gz

MAIL OVER SCRIPT

This script is to determine the current mail usage and quota levels for all users in an organisation by parsing the contents of the Maildir/maildirsize file. The results are not perfect since it doesn't examine every component of this particular file but in most cases it will provide you with a fairly good overview of what is currently going on. It outputs two files. One is mailq.txt which provides an overview of email use in the entire organisation while mailqo.txt provides details of only those people who are currently over 80% of their overall mail quota (a sample from these files is provided below). It will then send a copy of these reports to sysadmin@company.com

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

ldap-passwords-1.02.tar.gz

LDAP PASSWORD SCRIPT

This script is designed to generate and then implement new passwords for users network wide via LDAP scripts. The format for the input file (users.txt) is "Username SomethingElse" while the format for the output file (passwords.txt) is "Username Password".

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

idle-1.02.tar.gz

IDLE WATCH

This work is a continuation from that of a German programmer who originally intended to warn (and subsequently automatically logout) users who were idle during terminal sessions. The original tarball is available from the following location and is also included in this tarball for reference.

http://www.filewatcher.com/b/ftp/ftp.mao.kiev.ua/pub/software/Linux/system/admin/idle.0.0.html

Several modifications have been made to the original program. First, this version runs entirely 'silently' (there is no indication to the end user that the program is running). Second, it does not log users out. Third, the file which holds the usernames of those users to be excluded from monitoring has been changed from /etc/nologout to /etc/idle_nolog.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

elastix-batch-extensions-1.01.tar.gz

ELASTIX BATCH EXTENSIONS SCRIPT

This script is designed to create a group of extensions suitable for export to the Elastix PBX phone system. Information required for extension creation is taken from names.txt which contains names of people in the format of "FirstName LastName" on each line as well as the variable EXT which is the starting extension in the sequence. A suitable CSV file for import can then be used by logging into the Elastix web interface and then going to PBX -> Batch Extensions.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

zarafa-ab-1.05.sh.gz

ZARAFA ADDRESS BOOK SCRIPT

This script is designed to extract information required for an Address Book from an OpenLDAP database and then produce a suitable CSV file for import into an address book such as those used by Outlook, Thunderbird, and Zarafa.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

ldap-report-1.02.tar.gz

LDAP REPORT SCRIPT

This script is designed to generate LDAP text and PDF reports to send to a particular end email address (in this case, sysadmin@company.com) based on an LDAP query of a particular server named (funnily enough) ldap. Please modify to suit your particular circumstances. It is designed to be used from the BASH shell and the gcc compiler is required in order to compile the text2pdf.c program.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

makevpn-1.02.sh.gz

MAKEVPN FOR DD-WRT SCRIPT

This script is designed to generate an OpenVPN configuration suitable for use with DD-WRT on a Linksys WRT-54GL wireless router. Please modify to suit your particular circumstances. It is designed to be used from the BASH shell.

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

ldap-backup-1.01.sh.gz

LDAP BACKUP SCRIPT

Please note that this is a port of an existing script produced by J. P. Block for the Mac OS X Operating System and is designed to back up your OpenLDAP database. It can be run via cron jobs as well as manually.

http://havegnuwilltravel.apesseekingknowledge.net/2005/02/automatically-back-up-your-ldap.html

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Outlook-1.01.zip

OUTLOOK ROAMING PROFILE COMPATIBILITY SCRIPT

One of the biggest problems with using Microsoft Outlook on a computer network using roaming profiles is the fact that settings aren't saved because of the location where these settings are normally saved. These scripts are designed to address these particular problems by moving them to the correct locations on logon/logoff.

To use them open up gpedit.msc, User Configuration > Windows Settings > Scripts Logon/Logoff and then select the relevant logon files for logon/logoff. Obviously, you'll need a Perl interpreter for these scripts to be run. A good, free choice is ActivePerl, http://www.activestate.com/activeperl/

As this is the very first version of the program it may be VERY buggy. Please test prior to deployment in a production environment.

Roaming-Profile-Cleaner-1.04.zip

ROAMING PROFILE CLEANER

In a roaming profile environment one of the problems faced by users and administrators alike is dealing with the people who go over their size limit. There are a number of ways to deal with this particular problem. One is to increase their roaming profile size limit, another is to tell users to delete their files every once in a while. These scripts (one for use in a Linux environment and the other for a Windows environment) are designed to delete as many files as safely possible to keep users under quota.

As this is the very first version of the program it may be VERY buggy. Also, please check that the files being deleted aren't important to users in your particular environment and that the starting directory from which files are being deleted is correct. Obviously, you'll need a Perl interpreter for these scripts to be run. A good, free choice is ActivePerl, http://www.activestate.com/activeperl/

X-Lite-1.01.zip

X-LITE ROAMING PROFILE SETTINGS SAVER

One of the biggest problems with using X-Lite on a network that uses roaming profiles is that settings aren't preserved, because of the location in which they are normally saved. These scripts are designed to address this problem by moving the settings to the correct locations on logon/logoff.

To use them, open gpedit.msc, go to User Configuration > Windows Settings > Scripts (Logon/Logoff), and select the relevant script for logon and for logoff. Obviously, you'll need a Perl interpreter for these scripts to run. A good, free choice is ActivePerl, http://www.activestate.com/activeperl/

As this is the very first version of the program, it may be VERY buggy. Please test it prior to deployment in a production environment.

SRHC, Simple Rapid HOWTO Creator

SRHC, pronounced 'SHR-C', Simple Rapid HOWTO Creator

This shell script provides a rapid means by which to write an XML DocBook-conforming document suitable for submission to the LDP (Linux Documentation Project), http://www.tldp.org. As this is the very first version of the program, it is VERY buggy. It is also extremely limited in that it currently does not encode character entities, nor does it check whether your input is valid. To use it, place it in an empty folder, run it, and follow the prompts. At this stage I do not think that I will be developing this program further. However, the source is simple enough that, with a little effort, quite good markup could be produced from it. This program is distributed under the terms of the GNU GPL License.
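For illustration, the kind of skeleton such a script produces might look like this (the title, author and section below are placeholders, and the real script builds the document interactively):

#!/bin/bash
# Illustrative sketch only: emit a minimal DocBook XML article skeleton.
TITLE="My HOWTO"
AUTHOR="A. Writer"
cat > howto.xml <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
  "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<article>
  <title>${TITLE}</title>
  <articleinfo><author><othername>${AUTHOR}</othername></author></articleinfo>
  <sect1 id="introduction">
    <title>Introduction</title>
    <para>First section text goes here.</para>
  </sect1>
</article>
EOF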

linux-dictionary-tools.zip

TOOLS FOR EDITING THE 'LINUX DICTIONARY'

These tools are designed to aid people in the editing and modification of the 'Linux Dictionary', http://www.tldp.org/LDP/Linux-Dictionary/

SpeechSuite.zip

SPEECH SOFTWARE PROJECT

The SpeechSuite group of programs is basically an attempt at using speech recognition/synthesis in real-world applications. It is currently composed of three programs:

    • GUIShellTutor

GUIShellTutor is an extension of the GUIShell project, whose primary aim is to make the shell interface more usable. The key differences are that it possesses speech synthesis and recognition capabilities, and that there is no 'real program': all the source code is generated by the runme.sh file. Though this is slow, it calls other scripts which search through the entire system based on your PATH environment variable, get each command's 'whatis' description, then sort and format them. During this process all files are stored in *.tml files. Finally, the file is split alphabetically and stored in *.bml files. After this, the Java source code is created and moved into the correct locations, 'ant' (a build tool similar to 'make' but designed with Java in mind) is called, and the main interface is brought up. Please note that for the speech capabilities (and hence this program) to work, the 'festival', 'ant', and 'sphinx' programs must be installed in accordance with the instructions below. Note that you only need to run the runme.sh script once; thereafter you can simply launch the GUIShellTutor main interface via 'java GUIShellTutor' or any of the 'letters' via 'java -jar bin/(Some Letter)Digits.jar' from the location of your sphinx installation.

    • HouseDigits

HouseDigits is a program that provides a more user-friendly interface to the k74 command line program. However, it is also designed specifically to be used with Binh Nguyen's Home Control Prototype Device (HCPD), a device which provides a real-world example of an aesthetically pleasing and truly inexpensive voice-activated power control mechanism. To run the program just type 'java -jar bin/HouseDigits.jar'. Please note that if you wish to run HouseDigits it is advisable to install the enhanced k74 program available on this webpage.

    • CustomDigits

CustomDigits allows you to run customised commands. The interface is identical to that of GUIShellTutor. However, in order to run customised commands you must first alter the Custom.bml file so that each line is composed of a command, a tab, and then a description of that command (a sample is sketched below). After doing so, run the runme.sh script; to run CustomDigits just type 'java -jar bin/CustomDigits.jar'.
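For illustration, a Custom.bml along these lines (the commands shown are examples only) could be generated from the shell before running runme.sh:

# Illustrative sketch only: each line is a command, a literal tab, then a description.
printf 'xclock\tDisplay an analogue clock\n'   > Custom.bml
printf 'xterm\tOpen a new terminal window\n'  >> Custom.bml
printf 'xeyes\tFollow the mouse pointer\n'    >> Custom.bml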

At this stage I don't think I'll be developing this suite of applications further. However, if there are enough people who want it, I'll consider it.

Eliza.zip

COMPUTER CONVERSATION SIMULATOR

Lizzy is an extension to the Java implementation of Eliza written by Charles Hayden, http://www.chayden.net/eliza. The primary enhancement is that it also possesses a voice synthesis capability, providing a more realistic and engaging computer conversation simulation experience. To 'speak' with Lizzy, simply type your topic of conversation into the text field at the bottom of the window and click 'OK'. Operation of the rest of the program should be intuitive. Note that 'festival', http://www.cstr.ed.ac.uk/projects/festival/, must be installed for the speech synthesis capability (and hence the program) to work.
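A quick way to confirm that festival is installed and audible before launching Lizzy (this one-liner is not part of the program itself):

echo "Hello. The speech synthesis system appears to be working." | festival --tts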

k74-1.1.zip

PARALLEL PORT UTILITY PROGRAM

This version of the K74 program is designed specifically for use with the SpeechSuite group of programs and Binh Nguyen's Home Control Prototype Device (HCPD), a device which provides a real-world example of an aesthetically pleasing and truly inexpensive voice-activated power control mechanism.

config.sh.zip

CONFIGURATION CLEANER

This script is designed to address the problem of having a cluttered home directory full of "dotfiles". It basically moves all of your configuration files into a .etc directory located in your home directory and creates symbolic links to them so that existing applications remain unbroken. This is a first release, so there may be bugs. You should be in your home directory when you run it; however, I've left this condition unenforced so that you may test it on a non-critical directory before trying it in your home directory. To use the new layout, run the program with the "new" parameter. To revert to the old layout, run it with the "old" parameter.
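A minimal sketch of the "new" layout step, assuming it is run from the directory being cleaned (the real config.sh also supports reverting with the "old" parameter):

#!/bin/bash
# Illustrative sketch only -- test it in a scratch directory first.
shopt -s nullglob
mkdir -p .etc
for f in .[!.]*; do
    [ "$f" = ".etc" ] && continue           # don't move the target directory itself
    [ -L "$f" ] && continue                 # skip anything that is already a symlink
    mv "$f" .etc/ && ln -s ".etc/$f" "$f"   # relocate the dotfile and leave a link behind
done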

GUIShellTutor

README

GUIShellTutorFast

READMEFast.txt

VOICE RECOGNIZING/SYNTHESISING PROGRAM LAUNCHER/SHELL

This program is an extension of the GUIShell project, whose primary aim is to make the shell interface more usable. The key differences are that it possesses speech synthesis and recognition capabilities, and that there is no 'real program': all the source code is generated by the runme.sh file. Though this is slow, it calls other scripts which search through the entire system based on your PATH environment variable, get each command's 'whatis' description, then sort and format them. During this process all files are stored in *.tml files. Finally, the file is split alphabetically and stored in *.bml files. After this, the Java source code is created and moved into the correct locations, 'ant' (a build tool similar to 'make' but designed with Java in mind) is called, and the main interface is brought up. Please note that for the speech capabilities (and hence this program) to work, the 'festival', 'ant', and 'sphinx' programs must be installed in accordance with the instructions included in the README file. Note that you only need to run the runme.sh script once; thereafter you can simply launch the GUIShellTutor main interface via 'java GUIShellTutor' or any of the 'letters' via 'java -jar bin/(Some Letter)Digits.jar' from the location of your sphinx installation. GUIShellTutorFast is a version of GUIShellTutor that speeds up generation of code by a factor of four.
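A rough sketch of just the scanning stage described above (the real runme.sh goes on to generate and compile the Java sources; the file names here are illustrative):

#!/bin/bash
# Illustrative sketch only: collect 'whatis' descriptions for every executable
# on the PATH, then split the sorted list into one file per starting letter.
IFS=':' read -ra DIRS <<< "$PATH"
for d in "${DIRS[@]}"; do
    for cmd in "$d"/*; do
        [ -x "$cmd" ] && whatis "$(basename "$cmd")" 2>/dev/null
    done
done | sort -u > commands.tml
awk '{ print > (substr($1,1,1) ".bml") }' commands.tml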

Sample Technology Devices

Instructions for building the HCPD (PDF Format)

INSTRUCTIONS FOR BUILDING THE HCPD (HOME CONTROL PROTOTYPE DEVICE)

There currently exist many solutions by which to achieve home automation. However, in my opinion these solutions are either prohibitively expensive or are, well... ugly. With these issues in mind the Home Control Prototype Device (HCPD) was born. It basically makes use of the K74v2 PCB from http://www.kitsrus.com and the associated software created by James Cameron, http://quozl.netrek.org. Please note that the instructions that follow will be rather spartan, given that I assume most people will have their own interpretation of how their device should look. Also, note that I am not a qualified electrical technician and that the wiring looks a bit peculiar due to my lack of materials. Please read this entire document (most especially the Disclaimer) before you make a decision as to whether you would like to build such a device.

Sample Technology Articles

Fedora 9 (Networking & Firewall Setup)

These articles focus on mechanisms through which one can establish network connectivity and the basic steps towards building a firewall in Fedora 9.
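A minimal sketch of the kind of rule set the firewall article works towards (the allowed services are examples only; Fedora normally stores the result in /etc/sysconfig/iptables):

#!/bin/bash
# Illustrative sketch only: default-deny inbound policy with a few exceptions.
iptables -F
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT   # allow incoming SSH
service iptables save                           # persist the rules across reboots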

The Future of the Linux Desktop

Steve Ballmer once said that "innovation is not something that is easy to do in the kind of distributed environment that the open-source/Linux world works in". He argued that Microsoft's customers "have seen a lot more innovation from us than they have seen from that community" and that "Linux itself is a clone of an operating system that is 20-plus years old...." (http://rss.com.com/2008-1082-998297.html).

Indeed, I must agree with most of what he says. However, it is not without some derision that I do so. After all, if we were to examine the core of the Microsoft Operating System we could also come to the same conclusion. For example, the latest and greatest iteration of the Microsoft Operating System, 'Windows XP', is itself based on the Windows NT kernel, which was conceived some 20-25 years ago. The GUI (Graphical User Interface) that has become so ubiquitous, and at times so beguiling, is nothing more than an evolution of what was created at Xerox PARC laboratories some 30 years ago. And TCP/IP? Well, we all know about that....

Shell Talk

A few months ago we were asked to complete a program that was supposed to increase the usability of the Linux/UNIX shell interface as a university project. The basic idea was to provide a GUI (Graphical User Interface) that would give access to the most commonly used utilities on the command line and their options, while making the interface as usable and intuitive as possible. We were given a rough idea of how it was to be implemented and, all in all, most people ended up creating a very basic file-manager-like interface, with a set of buttons representing commands running down the left-hand side, a group of options represented as checkboxes next to them, and then a command line at the bottom of the window which would allow for manual typing/editing of commands....

Can GNU/Linux (Commercially) Survive?

A recent post to a local LUG mailing list set my mind wandering. The predicament was thus: believe it or not, an enterprise-class server, a system worth tens of millions of dollars, was purchased without an operating system. Furthermore, the cost of the proprietary operating system that was originally intended to be used on, and optimised for, that machine was substantial to say the least (approximately AUD $40,000). Hence, this user wanted to know whether Linux could be run on such a system. In short, the answer was yes. However, compromises had to be made; since this is not the topic that I wish to discuss, I shall not delve into this issue any further.

In light of recent events (especially in regards to SCO and its efforts to litigate against IBM, Novell and, all in all, almost the entire Linux world), I have had to begin to question the very nature of open source, how it has managed to alter the face of computing and, most importantly, its impact upon software developers...

A Short Walk Through /usr

The /usr directory usually contains by far the largest share of data on a system. Hence, this is one of the most important directories in the system, as it contains all the user binaries, their documentation, libraries, header files, etc.... X and its supporting libraries can be found here. User programs like telnet, ftp, etc.... are also placed here. In the original Unix implementations, /usr was where the home directories of the users were placed (that is to say, /usr/someone was then the directory now known as /home/someone). In current Unices, /usr is where user-land programs and data (as opposed to 'system land' programs and data) are located. The name hasn't changed, but its meaning has narrowed and lengthened from "everything user related" to "user usable programs and data"....
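To get a feel for this on a running system, the space used under /usr can be summarised from the shell:

du -sm /usr/* 2>/dev/null | sort -n    # per-directory usage in megabytes, smallest first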

Linux-on-the-HP-NX5000 (PDF Format)

This document reviews Linux on the HP NX5000 laptop computer, supposedly the first laptop ever designed with Linux in mind by a major computing company.
