IPstreet312 is the pseudonym, most likely of a developer or contributor on GitHub, behind the popular open source repository "freeiptv". Here is what can be said about it:
On GitHub, ipstreet312/freeiptv is a public project hosting .m3u playlists that give access to free IPTV channels, notably French, Turkish, Balkan, and Arabic ones. (github.com)
The "all.m3u" playlist centralizes and aggregates many channels (BFM, CNEWS, France 24, Euronews, TV5MONDE, etc.) and is updated regularly. (scribd.com)
The repository has earned more than 249 stars and 56 forks, which shows a certain popularity within the IPTV community. (github.com)
In short, ipstreet312 is a person (or an alias) with a strong streaming culture 😀 who maintains and publishes lists of free IPTV channels on GitHub, in particular through the "freeiptv" repository.
The sources included in the .m3u playlist come from channels that are freely accessible on the internet at no cost, such as France 24, Euronews, TV5 Monde, local channels, etc. These channels sometimes offer official .m3u8 streams, made available directly by the broadcaster or through a partner (e.g., a public news channel).
Install the French channels – Get free TV channels – Watch Turkish TV channels with an m3u playlist
✅ How to use the IPTV (.m3u) files from the ipstreet312/freeiptv repository
.m3u files are playlists containing live video streams (often in .m3u8 format) that can be played with a variety of software (a minimal sketch for fetching and inspecting such a playlist follows the list below).
▶ Common methods:
With TV apps (Smart IPTV, OTT Navigator, TiviMate…)
With media players (VLC on Windows/Mac/Linux…)
With Kodi (cross-platform)
With mobile apps (IPTV Smarters, GSE IPTV…)
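As an illustration, here is a minimal Python sketch that downloads the playlist and lists the channel names it contains. It assumes the requests library, and the raw GitHub URL below is a guess at the repository layout (branch and file name may differ), not a confirmed path.

import requests  # third-party: pip install requests

# Assumed raw URL for the aggregated playlist; adjust the branch/path as needed.
PLAYLIST_URL = "https://raw.githubusercontent.com/ipstreet312/freeiptv/master/all.m3u"

lines = requests.get(PLAYLIST_URL, timeout=30).text.splitlines()
# In an M3U playlist, each "#EXTINF:" line carries the channel name after the last comma,
# and the line that follows it is the stream URL (often an .m3u8 link).
channels = [line.rsplit(",", 1)[-1].strip() for line in lines if line.startswith("#EXTINF")]
print(f"{len(channels)} channels found; first few: {channels[:5]}")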
My sources for stream links also come from: https://github.com/ipstreet312?tab=following & https://github.com/ipstreet312?tab=stars
Many thanks to hayatiptv, iptv-org, inspirationlinks, LaQuay, BG47510, rideordie16, muratflash, iptvmix, mchoumi, schumijo, RokuIL, hemzaberkane, Paradise-91, LeBazarDeBryan, Sibprod, UzunMuhalefet, LITUATUI, and especially the streamlink project tools on GitHub.
numpy (pip install numpy, or install the Anaconda distribution)
Keras 1.2.0 or later, but below 2.0 (pip install keras==1.2)
Theano or TensorFlow. The code is fully tested on Theano. (pip install theano)
Usage
While a run is in progress, the results as well as the AI models are saved in the ./results subfolder. For a complete run (five experiments for each method), use the following command (it may take several hours depending on your machine):
./run.sh
NOTE: Because the state-shape is relatively small, the deep RL methods of this code run faster on CPU.
Alternatively, you can perform a single run and select the method with the --mode option.
--mode can be either of dqn, dqn+1, hra, hra+1, or all.
Demo
We have also provided code to demo the Tabular GVF/no-GVF methods. You first need to train the model using one of the above commands (Tabular GVF or no-GVF) and then run the demo.
An iOS application to display the location and statistics of MLB players on the field in real-time.
This brand-new iOS baseball app rethinks the way spectators watch America's pastime. Baseball Spectator will ignite a newfound passion for baseball by providing an individualized, augmented reality experience for both newbies and devoted fans.
Description
Baseball Spectator is a landscape iOS application used while spectating a baseball game in person. It enhances the user's ballgame experience by making it easy to follow the stats of the current game and of each player. More specifically, while the user points their camera at the field (with a view of at least the whole infield), it provides the real-time location of each player, their corresponding individual information and statistics, and a virtual scoreboard. The target audience is either devoted baseball fans who are curious about a deeper analysis of the game or naive fans who are simply looking for basic information about the current game.
Capabilities
User End
Display real-time position of players through a circle indicator placed underneath each player
Click on the player indicator to show a player info bar (player name and number)
Click on the player info bar to open up an expanded view of the individual player’s statistics
Display the current score, inning number, outs, strikes, and balls in a scoreboard in the top left
Click on the scoreboard to open up an expanded inning by inning scoreboard with additional game statistics
(For app demonstration purposes) Import your own video from storage for analysis through the import button on the top right of the screen
Toggle between displaying stats for fielders versus batters
Developer End
Retrieve real-time game stats from an MLB-administered website
Locate the coordinates of each of the players on the field, each of the infield bases, and each of the locations players are expected to be standing
Identify the user’s location (which stadium) using their phone GPS
TODO (desired but uncompleted capabilities)
Automatically identify which base is home plate without the user manually selecting home plate
Identify which players are on which team (for now, the app uses a toggle button to switch between defense and offense)
Make the color thresholding for image processing more adaptable to varying lighting conditions (right now the thresholding works well except under dark, overcast shadows; however, shadows should not be much of a problem, since when large shadows start appearing on the field, the stadium lights are quickly turned on, fixing the problem); see the generic thresholding sketch below
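The app itself is written in Swift; purely as a generic illustration of the kind of HSV color thresholding mentioned above (not the app's actual pipeline; the bounds and file name are placeholders), a Python/OpenCV sketch could look like this:

import cv2
import numpy as np

frame = cv2.imread("field_frame.jpg")                  # hypothetical captured frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)           # HSV is more robust to lighting than RGB
lower_green = np.array([35, 40, 40])                   # placeholder bounds for infield grass
upper_green = np.array([85, 255, 255])
mask = cv2.inRange(hsv, lower_green, upper_green)      # 255 where the pixel falls inside the bounds
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove small speckles from shadows/noise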
App View Descriptions
Main View
Displays the scoreboard and camera footage marked up with the player indicators. This view is the central view of the app and provides the navigation links/buttons pointing to the two main expanded views. If a player indicator is tapped, a brief statistics bar opens up. If the brief statistics bar is tapped, the player statistics expanded view opens up. If the scoreboard in the upper left is tapped, the scoreboard expanded view opens. It also has a toggle that allows the user to switch between seeing stats for fielders versus batters.
Scoreboard Expanded View
Displays the current score of the game in greater detail, including inning-by-inning scores, total errors for each team, and more.
Player Statistics Expanded View
Displays more detailed information about the selected player, including their picture, current game stats, 2020 season statistics, and career statistics. The view also displays a brief overview of the entire team's statistics at the bottom, including their number of wins, losses, win percentage, and current league standing.
You can find the API package under .node-red/node_modules/node-red-contrib-iris/intersystems-iris-native. Please check the README file for supported operating systems. If your OS is not supported, you can get the API from your InterSystems IRIS instance under ~/IRIS/dev/nodejs/intersystems-iris-native.
See the documentation for how to load additional modules into Node-RED.
Download Node.IRISInterface
Go to the raw file on raw.githubusercontent.com, right-click on the page, and choose Save Page As…. Afterwards, go to the InterSystems Management Portal, navigate to System Explorer > Classes, and click Import. There you select the file you just downloaded and click Import.
If you only operate in one namespace, import the class into that namespace. If you want access from multiple namespaces, map the class to the %ALL namespace.
Connect to IRIS
Set the connection properties via the node properties. The node builds a connection when you deploy and keeps that connection open until you redeploy or disconnect manually.
You can set the default properties in ~/.node-red/node_modules/node-red-contrib-iris/ServerProperties.json. Or use the SetServerProperties flow under Import > Examples > node-red-contrib-iris > SetServerProperties.
Usage
The nodes are protected against SQL injection by parameterizing the statements.
Pass the SQL statement as a string in the msg.data field and the node will parameterize the statement itself.
msg.data="SELECT * FROM NodeRed.Person WHERE Age >= 42 AND Name = 'Max' ";
Or a parameterized statement:
msg.data={sql: 'SELECT * FROM NodeRed.Person WHERE Age >= ? AND Name = ? ',values: [42,'Max'],};
Nodes
IRIS – A node for executing DML statements such as SELECT, UPDATE, INSERT and DELETE and DDL statements such as CREATE, ALTER and DROP in InterSystems IRIS.
IRIS_CREATE – Creates a class in InterSystems IRIS.
IRIS_DELETE_CLASS – Deletes a class in InterSystems IRIS.
IRIS_INSERT – A node for SQL INSERT statements only. Can also generate the class, based on the statement, if it does not already exist.
IRIS_OO – Can insert a hierarchical JSON object.
IRIS_CALL – Calls InterSystems IRIS class methods.
See the node descriptions for further information.
Bugs
Currently does not work in a Docker container!
The statement will be parameterized incorrectly if strings contain whitespace and commas. Please parameterize the statement yourself in that case. Example:
Does not work:
msg.data="SELECT * FROM NodeRed.Person WHERE Name = 'Smith, John'";
But this will work:
msg.data={"sql":"SELECT * FROM NodeRed.Person WHERE Name = ?, "values":["Smith, John"]}
In the above example, we’ve designed the subscriber, the FooItems class, to declare an array of strings correlating to properties in the store’s state. If you’re from the Redux world, this is akin to “connecting” a consumer to a provider via higher-order function/component.
After the subscribe call is made, your bindSubscriber function will be called where you can pass along the default values as you see fit.
NOTE: In general, you should try to use a simple data structure as the second argument to subscribe; this ensures your bindings have generic and consistent expectations.
dispatch(type, payload)
Requests a state change in your store.
We can extend the previous example with a setter to call dispatch:
Now when the addItem method is called, Core Flux will pass along the action type and payload to your reducer.
The reducer could have a logic branch on the action type called ADD_ITEM which adds the given item to state, then returns the resulting new state (containing the full items list).
Finally, the result would then be handed over to your bindState binding.
NOTE: Much like in subscribe, it’s best to maintain data types in the payload so your reducer can have consistent expectations.
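Core Flux itself is a JavaScript library; purely as a language-agnostic illustration of the dispatch → reducer → state-binding flow described above, here is a minimal sketch in Python (the names and structure are ours, not Core Flux's API):

def reducer(state, action):
    # Branch on the action type and return a *new* state instead of mutating the old one.
    if action["type"] == "ADD_ITEM":
        return {**state, "items": state["items"] + [action["payload"]["item"]]}
    return state

class Store:
    def __init__(self, reducer, state):
        self.reducer, self.state, self.subscribers = reducer, state, []

    def dispatch(self, type, payload):
        self.state = self.reducer(self.state, {"type": type, "payload": payload})
        for bind_state in self.subscribers:   # stand-in for the bindState binding
            bind_state(self.state)

store = Store(reducer, {"items": []})
store.subscribers.append(lambda s: print("new state:", s))
store.dispatch("ADD_ITEM", {"item": "foo"})   # prints: new state: {'items': ['foo']}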
Bindings
Here’s a breakdown of each binding needed when initializing a new store:
bindSubscriber(subscription, state)
subscription ([subscriber, data]): A tuple containing the subscribed object and its state-relational data.
state (object): The current state object.
Called after a new subscribe is made and a subscription has been added to the store. Use it to set initial state on the new subscriber. Use the data provided to infer a new operation, e.g., setting a stateful property to the subscriber.
reducer(state, action)
state (object): Snapshot of the current state object.
action ({ type: string, payload: object }): The dispatched action type and its payload.
Called during a new dispatch. Create a new version of state and return it.
bindState(subscriptions, reducedState, setState)
subscriptions (subscription[]): An array containing all subscriptions.
reducedState (object): The state object as returned by the reducer.
setState (function): Setter used to apply the reduced state back to the store.
Called at the end of a dispatch call, after your reducer callback has processed the next state value. Set the new state back on your subscribers and back on the store. It's possible, and expected, for you to call bindSubscriber again to apply these updates DRYly. You can also safely return from this function to no-op.
Exposing the store
For utility or debugging reasons, you may want to look at the store you’re working with. To do so, you can use the __data property when creating a store:
Hi! This is just my first project in the Java language. It is also my first app, and after 3 months of work, here is its beta version, 1.0Beta. It may be full of errors, because, as we know, error is the best teacher for a programmer.
OK, let's see a brief introduction to it.
Introduction
What is SafeGuard?
In a single sentence:
SafeGuard is a totally offline assistant app that can detect certain SOS signals and perform predefined actions to deal with the situation.
Well, I made this app especially to protect women. We all know that girls and women are really not safe these days; there are many cruel criminals outside the home. This app is still under development, so it may have many bugs and errors. You can report them via the in-app Report button, or you can contact me at error368280@gmail.com and describe the problem, so that we can make the app better and help reduce the crime rate a little.
Actually, I forgot to say why this app is necessary. People can call 112 in an emergency, but there are many situations where the victim doesn't have time to open their phone and call the emergency number; they are busy either running from the criminal or dealing with the painful situation. Couldn't they have a better chance of surviving if at least their family or friends (whom they trust) were informed that something might be wrong with their loved one? In my opinion, yes. In such a situation, those people can at least call the person who may be in trouble and confirm that everything is all right. That is basically how my app works 😅
Before we get to how to use it and how it works, let's look at some of the features my app has.
Features
Shake detection
Voice command
How does it work?
My app has a continuous listening function, so in an SOS situation the user can give one of the predefined voice commands or shake the device; this signals to the app that the user is in trouble and needs help. An overlay pop-up is then displayed to confirm whether the trigger was accidental. If the user does not respond to it within 30 seconds, the app triggers its SOS mode.
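As a rough, language-agnostic illustration of that confirmation flow (the app itself is an Android/Java app; the function and parameter names below are hypothetical):

def on_trigger(wait_for_user_dismissal, start_sos_mode, timeout_s=30):
    """Called when a shake or a predefined voice command is detected."""
    # Show the overlay pop-up and wait up to timeout_s seconds for the user to dismiss it.
    dismissed_as_false_alarm = wait_for_user_dismissal(timeout_s)
    if not dismissed_as_false_alarm:
        # No response within the timeout: assume a real emergency.
        start_sos_mode()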
In the future I will also add other ways to detect the signal and improve this app as much as I can. Below are some important notes on using my app.
IMPORTANT
As I said, this app is under development, has a continuous listening function, and works completely offline, so it processes all the data on your device. It can therefore use a lot of CPU and battery. When you feel safe, and generally when you are at home or another safe place, stop the service from the app: App > Home > Stop button.
Start the service only when you feel unsafe. It should not hang your phone, but it may use a lot of battery. You can keep the service ON all the time if you want, but I highly recommend starting the service before you leave home (generally at night).
Sometimes it may detect false triggers, so when your device vibrates it shows a confirmation window; remember to check it.
NOTE
Due to privacy restrictions, Android limits background microphone access, so the continuous listening function may misbehave or sometimes stop listening. In that case the other trigger signals will still work, so I recommend using the shake function and the other functions I will add soon. If you are using a MIUI/Xiaomi phone, there is a high chance that it will restrict my voice trigger method; sorry for that, but I am trying to fix this as soon as possible.
There is a good chance of monthly updates. I have not added any update alert mechanism yet, so please check this site to update the app. Soon I will upload it to the Google Play store (if I get permission); after that you can update from there.
At last, my final words are:
STAY SAFE, STAY HAPPY, LIVE LIFE WITH JOY AND ENJOY YOUR FREEDOM. IT'S YOUR RIGHT ~ERROR
Bricked Up is a full-stack web app that aggregates and analyzes LEGO deals 🧩 scraped from public sources. Users can browse deals, filter and sort listings, save favorites, and gain insights with interactive price indicators.
Bricked Up solves the challenge of finding the best LEGO deals in a user-friendly, responsive, and automated way. By leveraging scraping, APIs, and automation, it ensures LEGO enthusiasts never miss out on a great deal. 🧱✨
✨ Features
🛒 View Deals: Browse through aggregated LEGO offers.
📊 Relevance Score:
Each deal is scored based on its popularity, discount, freshness, and resalability metrics.
Relevance helps users prioritize the best deals.
🔍 Interactive Filters:
🏆 Best Discount
🔥 Hot Deals
📈 Popular Deals
Relevance-based sorting
📊 Deal Insights:
Average and percentile price indicators.
Expiration countdown for time-sensitive offers.
❤️ Save Favorites: Mark and revisit your favorite deals.
🌗 Dark Mode: Toggle between light and dark themes.
🔄 Automated Refresh: Deals update twice daily, at 5 AM and 6 PM (UTC+2).
📱 Responsive Design: Works seamlessly on all devices, with optimized modals and layouts.
🛠️ How It Works Accordion: Guides users on searching, sorting, and understanding the scores.
🛠️ Technologies Used
Frontend: HTML, CSS (Bootstrap 5) 🎨, JavaScript ⚡
Backend: Node.js with Express.js 🚀
Database: MongoDB Atlas 🗄️
Web Scraping: Puppeteer 🕷️, Cheerio 🌿
Deployment: Vercel 🛠️
Automation: GitHub Actions 🕒
📸 Screenshots
A clean, interactive homepage for LEGO enthusiasts.
Seamless switch to dark mode.
Key price insights with visual indicators.
📖 Understanding the Relevance Score
The Relevance Score is a calculated metric that helps users identify the best deals. It evaluates:
Discount: The percentage off the original price.
Popularity: Based on the number of comments and likes.
Freshness: How recently the deal was published.
Resalability: Resale potential based on average resale prices and listing activity.
Temperature: A deal’s popularity among community users.
Expiry: Whether the deal is expiring soon.
The score ranges from 0% (low relevance) to 100% (high relevance).
📊 Relevance Score Explained
The Relevance Score is a metric (ranging from 0 to 1) used to rank LEGO deals based on their value and appeal. It evaluates multiple factors with assigned weights to provide a comprehensive score.
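The exact weights and normalization are not published here; as a rough sketch of how such a weighted score could be combined (all weights and the rounding below are placeholder assumptions, not the site's real formula):

# Illustrative only: placeholder weights, not the app's actual scoring logic.
def relevance_score(discount, popularity, freshness, resalability, temperature, expiring_soon):
    """Each numeric input is expected to be pre-normalized to the 0..1 range."""
    weights = {
        "discount": 0.30,
        "popularity": 0.20,
        "freshness": 0.15,
        "resalability": 0.15,
        "temperature": 0.15,
        "expiry": 0.05,
    }
    score = (
        weights["discount"] * discount
        + weights["popularity"] * popularity
        + weights["freshness"] * freshness
        + weights["resalability"] * resalability
        + weights["temperature"] * temperature
        + weights["expiry"] * (1.0 if expiring_soon else 0.0)
    )
    return round(score, 2)  # 0.0 (low relevance) to 1.0 (high relevance)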
This website aggregates publicly available data for educational and informational purposes only.
🔒 No malicious intent is associated with data scraping. For any concerns, feel free to contact me.
This repository provides the instructions to add the AWX requirements for Junos automation.
This repository doesn’t install AWX. You still need to install AWX yourself.
This repository has automation content to:
configure an existing AWX setup: if you want to consume Ansible content using AWX, you can use this repository to quickly add it to AWX.
consume AWX: you can use this repository to execute playbooks with REST calls.
How to use this repo
The steps are:
Install AWX. This repository doesn’t install AWX. You still need to install AWX yourself.
Install the requirements to use Ansible modules for Junos
Add the Juniper.junos role from Galaxy to AWX
Install the requirements to use the python scripts hosted in this repository
Clone this repository
Edit the file variables.yml to indicate your details, such as the IP address of your AWX, the git repository that has the playbooks you want to add to your AWX, etc.
You can now consume your playbooks with AWX GUI and AWX API!
AWX GUI is http://<awx_ip_address>
You can visit the AWX REST API with a web browser: http://<awx_ip_address>/api/v2/
Execute the file run_awx_template.py to consume your playbooks from AWX REST API.
AWX installation
This repository doesn’t install AWX. You still need to install AWX yourself.
Here’s the install guide
I am running AWX as a containerized application.
By default, AWX pulls the latest tag from Docker Hub.
Here's how to use another tag. You need to do this before installing AWX:
$ nano awx/installer/inventory
$ more awx/installer/inventory | grep dockerhub_version
dockerhub_version=1.0.1
By default, AWX database is lost with reboots. You can change this behavior when you install AWX if you prefer AWX to keep its database after system restarts.
Issue the docker ps command to see what containers are running.
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5f506acf7a9a ansible/awx_task:latest "/tini -- /bin/sh -c…" 2 weeks ago Up About a minute 8052/tcp awx_task
89d2b50cd396 ansible/awx_web:latest "/tini -- /bin/sh -c…" 2 weeks ago Up About a minute 0.0.0.0:80->8052/tcp awx_web
6677b05c3dd8 memcached:alpine "docker-entrypoint.s…" 2 weeks ago Up About a minute 11211/tcp memcached
702d9538c538 rabbitmq:3 "docker-entrypoint.s…" 2 weeks ago Up About a minute 4369/tcp, 5671-5672/tcp, 25672/tcp rabbitmq
7167f4a3748e postgres:9.6 "docker-entrypoint.s…" 2 weeks ago Up About a minute 5432/tcp postgres
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5f506acf7a9a ansible/awx_task:latest "/tini -- /bin/sh -c…" 2 weeks ago Up 1 second 8052/tcp awx_task
89d2b50cd396 ansible/awx_web:latest "/tini -- /bin/sh -c…" 2 weeks ago Up 1 second 0.0.0.0:80->8052/tcp awx_web
6677b05c3dd8 memcached:alpine "docker-entrypoint.s…" 2 weeks ago Up 3 seconds 11211/tcp memcached
702d9538c538 rabbitmq:3 "docker-entrypoint.s…" 2 weeks ago Up 2 seconds 4369/tcp, 5671-5672/tcp, 25672/tcp rabbitmq
7167f4a3748e postgres:9.6 "docker-entrypoint.s…" 2 weeks ago Up 2 seconds 5432/tcp postgres
The default AWX credentials are admin/password.
install the requirements to use Ansible modules for Junos
In addition to the Ansible modules for Junos shipped with AWX, there is another module library you can use to interact with Junos.
These modules are available in the Juniper.junos role on Galaxy.
These modules are not shipped with Ansible.
These two sets of modules for Junos automation can coexist on the same Ansible control machine.
Run these commands from the awx_task container to download and install the Juniper.junos role from Galaxy.
Connect to the container cli:
docker exec -it awx_task bash
Once connected to the awx_task container, run these commands:
# more ansible.cfg
[defaults]
roles_path = /etc/ansible/roles:./
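The Galaxy download step itself is not shown above; with the roles_path configured as above, pulling the role would typically be done with a command like the following (run inside the awx_task container):
ansible-galaxy install Juniper.junos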
install the requirements to use the automation content hosted in this repository
The python scripts hosted in this repository use the requests library to make REST calls to AWX.
Run these commands on your laptop:
sudo -s
pip install requests
clone this repository
Run these commands on your laptop:
sudo -s
git clone https://github.com/ksator/junos-automation-with-AWX.git
cd junos-automation-with-AWX
Define your variables
The file variables.yml defines variables.
On your laptop, edit it to indicate details such as:
The IP address of your AWX
the git repository that has your playbooks
the list of playbooks from your git repository you want to add to AWX
the Junos devices credentials
and some additional details
Run these commands on your laptop:
vi variables.yml
$ more variables.yml
---
# awx ip @
awx:
  ip: 192.168.233.142
# awx organization you want to create
organization:
  name: "Juniper"
# awx team you want to create. The below team belongs to the above organization
team:
  name: "automation"
# awx user you want to create. The below user belongs to the above organization
user:
  username: "ksator"
  first_name: "khelil"
  last_name: "sator"
  password: "AWXpassword"
# awx project you want to create. The below project belongs to the above organization
project:
  name: "Junos automation"
  git_url: "https://github.com/ksator/lab_management.git"
# credentials for AWX to connect to junos devices. The below credentials belong to the above organization
credentials:
  name: "junos"
  username: "lab"
  password: "jnpr123"
# awx inventory you want to create.
# indicate which file you want to use as source of the AWX inventory.
# The below inventory belongs to the above organization
inventory:
  name: "junos_lab"
  file: "hosts"
# awx templates you want to create.
# indicate the list of playbooks you want to use when creating equivalent awx templates.
# The below playbook belongs to the above source
playbooks:
  - 'pb.check.lldp.yml'
  - 'pb.check.bgp.yml'
  - 'pb.check.interfaces.yml'
  - 'pb.check.vlans.yml'
  - 'pb.check.lldp.json.yml'
  - 'pb.configure.golden.yml'
  - 'pb.configure.telemetry.yml'
  - 'pb.rollback.yml'
  - 'pb.print.facts.yml'
  - 'pb.check.all.yml'
  - 'pb.check.ports.availability.yml'
The python script configure_awx.py uses the AWX REST API and the details defined in variables.yml to create:
An AWX organization
An AWX team. The team belongs to the organization created above
An AWX user. The user belongs to the organization created above
Credentials for AWX to connect to Junos devices. These credentials belong to the organization created above
An AWX project. The project belongs to the organization created above. The project uses playbooks from a git repository.
An AWX inventory. It belongs to the organization created above
An equivalent AWX template for each playbook from the git repository
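As a rough sketch of the kind of REST call the script makes (shown here only for the organization; the /api/v2/ endpoint is the standard AWX path, but the exact payload fields are assumptions):

import requests

AWX_URL = "http://192.168.233.142"   # awx.ip from variables.yml
AUTH = ("admin", "password")          # default AWX credentials

# Create the organization through the AWX REST API.
response = requests.post(f"{AWX_URL}/api/v2/organizations/",
                         auth=AUTH,
                         json={"name": "Juniper"})
response.raise_for_status()
print("Juniper organization successfully created")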
Run this command on your laptop:
# python configure_awx.py
Juniper organization successfully created
automation team successfully created and added to the Juniper organization
ksator user successfully created and added to the Juniper organization
Junos automation project successfully created and added to the Juniper organization
junos credentials successfully created and added to the Juniper organization
junos_lab inventory successfully created and added to the Juniper organization
hosts file successfully added as a source to junos_lab inventory
wait 20 seconds before to resume
run_pb.check.lldp.yml template successfully created using the playbook pb.check.lldp.yml
run_pb.check.bgp.yml template successfully created using the playbook pb.check.bgp.yml
run_pb.check.interfaces.yml template successfully created using the playbook pb.check.interfaces.yml
run_pb.check.vlans.yml template successfully created using the playbook pb.check.vlans.yml
run_pb.check.lldp.json.yml template successfully created using the playbook pb.check.lldp.json.yml
run_pb.configure.golden.yml template successfully created using the playbook pb.configure.golden.yml
run_pb.configure.telemetry.yml template successfully created using the playbook pb.configure.telemetry.yml
run_pb.rollback.yml template successfully created using the playbook pb.rollback.yml
run_pb.print.facts.yml template successfully created using the playbook pb.print.facts.yml
run_pb.check.all.yml template successfully created using the playbook pb.check.all.yml
run_pb.check.ports.availability.yml template successfully created using the playbook pb.check.ports.availability.yml
The python script run_awx_templates.py makes REST calls to AWX to run an existing awx template.
Pass the template name as an argument.
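A minimal sketch of how such a script can drive the AWX REST API (look up the template by name, launch it, then poll the job status). The /api/v2/ endpoints below are the standard AWX ones, and error handling is kept minimal:

import sys
import time
import requests

AWX_URL = "http://192.168.233.142"   # awx.ip from variables.yml
AUTH = ("admin", "password")          # default AWX credentials

template_name = sys.argv[1]           # e.g. run_pb.check.bgp.yml

# Find the job template id from its name.
templates = requests.get(f"{AWX_URL}/api/v2/job_templates/",
                         auth=AUTH, params={"name": template_name}).json()
if templates["count"] != 1:
    sys.exit("there is a problem with that template")
template_id = templates["results"][0]["id"]

# Launch the template, then poll the resulting job until it finishes.
job = requests.post(f"{AWX_URL}/api/v2/job_templates/{template_id}/launch/", auth=AUTH).json()
job_url = f"{AWX_URL}/api/v2/jobs/{job['id']}/"
print("waiting for the job to complete ...")
while True:
    status = requests.get(job_url, auth=AUTH).json()["status"]
    if status not in ("pending", "waiting", "running"):
        break
    print("still waiting for the job to complete ...")
    time.sleep(5)
print("status is " + status)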
Run this command on your laptop to consume an existing awx template:
# python run_awx_template.py run_pb.check.bgp.yml
waiting for the job to complete ...
still waiting for the job to complete ...
still waiting for the job to complete ...
still waiting for the job to complete ...
status is successful
# python run_awx_template.py run_pb.check.lldp.yml
waiting for the job to complete ...
still waiting for the job to complete ...
still waiting for the job to complete ...
still waiting for the job to complete ...
still waiting for the job to complete ...
still waiting for the job to complete ...
status is successful
# python run_awx_templates.py non_existing_awx_template_name
there is a problem with that template
Verify with the GUI
Delete AWX templates with automation
Run this command on your laptop to delete all AWX templates:
# python delete_awx_templates.py
Note: By default, AWX database is lost with reboots. You can change this behavior when you install AWX if you prefer AWX to keep its database after system restarts.
# tower-cli config
# User options (set with `tower-cli config`; stored in ~/.tower_cli.cfg).
username: admin
password: password
host: http://localhost:80
verify_ssl: False
# Defaults.
use_token: False
verbose: False
certificate:
format: human
color: True
description_on: False
Use the CLI
# tower-cli credential list
== =============== ===============
id name credential_type
== =============== ===============
1 Demo Credential 1
== =============== ===============
# tower-cli organization list
== =======
id name
== =======
1 Default
2 Juniper
== =======
# tower-cli organization --help
Usage: tower-cli organization [OPTIONS] COMMAND [ARGS]...
Manage organizations within Ansible Tower.
Options:
--help Show this message and exit.
Commands:
associate Associate a user with this organization.
associate_admin Associate an admin with this organization.
associate_ig Associate an ig with this organization.
copy Copy an organization.
create Create an organization.
delete Remove the given organization.
disassociate Disassociate a user with this organization.
disassociate_admin Disassociate an admin with this organization.
disassociate_ig Disassociate an ig with this organization.
get Return one and exactly one organization.
list Return a list of organizations.
modify Modify an already existing organization.
# tower-cli organization delete --help
Usage: tower-cli organization delete [OPTIONS] [ID]
Remove the given organization.
If --fail-on-missing is True, then the organization's not being found is
considered a failure; otherwise, a success with no change is reported.
Field Options:
-n, --name TEXT [REQUIRED] The name field.
-d, --description TEXT The description field.
Global Options:
--use-token Turn on Tower's token-based authentication.
Set config use_token to make this permanent.
--certificate TEXT Path to a custom certificate file that will
be used throughout the command. Overwritten
by --insecure flag if set.
--insecure Turn off insecure connection warnings. Set
config verify_ssl to make this permanent.
--description-on Show description in human-formatted output.
-v, --verbose Show information about requests being made.
-f, --format [human|json|yaml|id]
Output format. The "human" format is
intended for humans reading output on the
CLI; the "json" and "yaml" formats provide
more data, and "id" echos the object id
only.
-p, --tower-password TEXT Password to use to authenticate to Ansible
Tower. This will take precedence over a
password provided to `tower config`, if any.
-u, --tower-username TEXT Username to use to authenticate to Ansible
Tower. This will take precedence over a
username provided to `tower config`, if any.
-h, --tower-host TEXT The location of the Ansible Tower host.
HTTPS is assumed as the protocol unless
"http://" is explicitly provided. This will
take precedence over a host provided to
`tower config`, if any.
Other Options:
--help Show this message and exit.
Continuous integration with Travis CI
There is a GitHub webhook with Travis CI.
The syntax of the python scripts in this repository is tested automatically by Travis CI.
The file .travis.yml at the root of this repository is used for this.
This issue will track the progress of the new ZombieNet SDK.
We want to create a new SDK for ZombieNet that allows users to build more complex use cases and interact with the network in a more flexible and programmatic way.
The SDK will provide a set of building blocks that users can combine in order to spawn and interact with (test/query/etc.) the network, providing a fluent API to craft different topologies and assertions against the running network. The new SDK will support the same range of providers and configurations that can be created in the current version (v1).
We also want to continue supporting the CLI interface, but it should be updated to use the SDK under the hood.
The Plan
We plan to divide the work into phases to ensure we cover all the requirements, and to split each phase into small tasks, each covering one of the building blocks and the interactions between them.
Prototype building blocks
Prototype each building block with a clear interface and a defined way to interact with it
Add support for running tests against a running network (wip)
Add more CLI subcommands
Add js/subxt snippets ready to use in assertions (e.g. transfers)
Add XCM support in built-in assertions
Add ink! smart contract support
Add support to start from a live network (fork-off) [check subalfred]
Create a “default configuration” (if zombieconfig.json exists in the same directory as zombienet, then the config applied in it will override zombienet's default configuration; e.g., if the user wants native as the default provider instead of k8s, they can add that to the file)
🌎 GeoCLIP: Clip-Inspired Alignment between Locations and Images for Effective Worldwide Geo-localization
📍 Try out our demo!
Description
GeoCLIP addresses the challenges of worldwide image geo-localization by introducing a novel CLIP-inspired approach that aligns images with geographical locations, achieving state-of-the-art results on geo-localization and GPS to vector representation on benchmark datasets (Im2GPS3k, YFCC26k, GWS15k, and the Geo-Tagged NUS-Wide Dataset). Our location encoder models the Earth as a continuous function, learning semantically rich, CLIP-aligned features that are suitable for geo-localization. Additionally, our location encoder architecture generalizes, making it suitable for use as a pre-trained GPS encoder to aid geo-aware neural architectures.
Method
Similarly to OpenAI's CLIP, GeoCLIP is trained contrastively by matching Image-GPS pairs. By using the MP-16 dataset, composed of 4.7M images taken across the globe, GeoCLIP learns distinctive visual features associated with different locations on Earth.
🚧 Repo Under Construction 🔨
📎 Getting Started: API
You can install GeoCLIP’s module using pip:
pip install geoclip
or directly from source:
git clone https://github.com/VicenteVivan/geo-clip
cd geo-clip
python setup.py install
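Once installed, the package also exposes the full GeoCLIP model for image-to-GPS prediction. The snippet below follows the project's published example, but treat the exact class name and predict signature as assumptions that may change while the repo is under construction:

from geoclip import GeoCLIP  # class name assumed from the project's examples

model = GeoCLIP()

image_path = "image.png"  # path to your query image (placeholder)
# predict() signature assumed: returns the top-k GPS coordinates and their probabilities.
top_pred_gps, top_pred_prob = model.predict(image_path, top_k=5)

for (lat, lon), prob in zip(top_pred_gps.tolist(), top_pred_prob.tolist()):
    print(f"({lat:.6f}, {lon:.6f})  p={prob:.6f}")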
In our paper, we show that once trained, our location encoder can assist other geo-aware neural architectures. Specifically, we explore our location encoder’s ability to improve multi-class classification accuracy. We achieved state-of-the-art results on the Geo-Tagged NUS-Wide Dataset by concatenating GPS features from our pre-trained location encoder with an image’s visual features. Additionally, we found that the GPS features learned by our location encoder, even without extra information, are effective for geo-aware image classification, achieving state-of-the-art performance in the GPS-only multi-class classification task on the same dataset.
Usage: Pre-Trained Location Encoder
import torch
from geoclip import LocationEncoder

gps_encoder = LocationEncoder()
gps_data = torch.Tensor([[40.7128, -74.0060], [34.0522, -118.2437]])  # NYC and LA in (lat, lon)
gps_embeddings = gps_encoder(gps_data)
print(gps_embeddings.shape)  # (2, 512)
Acknowledgments
This project incorporates code from Joshua M. Long’s Random Fourier Features Pytorch. For the original source, visit here.
Citation
If you find GeoCLIP beneficial for your research, please consider citing us with the following BibTeX entry:
@inproceedings{geoclip,
title={GeoCLIP: Clip-Inspired Alignment between Locations and Images for Effective Worldwide Geo-localization},
author={Vivanco, Vicente and Nayak, Gaurav Kumar and Shah, Mubarak},
booktitle={Advances in Neural Information Processing Systems},
year={2023}
}