General-ENG

Xab3r

Member
Released 5691 in the Alpha channel (Settings => Update channel). This newer version contains the embedded CVAT Automatic Annotation Tool, a program that is expected to be used in conjunction with the ML Search trigger to get more out of machine learning.
https://wiki.eyeauras.com/en/CVATAAT/getting-started

Starting with ML is hard, yet the capabilities it provides are unmatched, so I want to bring as many newcomers into it as possible. CVATAAT is one of the steps - it makes the process of training models much, much easier.
It also has built-in automatic annotation (which can use an older version of a model to annotate new images), so you'll get a good-enough model much faster
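The auto-annotation idea can be sketched roughly like this (all names here are illustrative, not the actual CVATAAT API): run the previous model version over new screenshots, accept high-confidence predictions as proposed labels automatically, and only hand-review the rest.

```python
# Sketch of model-assisted annotation: a previous model pre-labels new
# images, and a human only reviews/corrects the uncertain ones.
# All names here are illustrative, not the actual CVATAAT API.

def old_model_predict(image):
    """Stand-in for inference with the previous model version.
    Returns (label, confidence) proposals for one image."""
    return [("fish_bite", 0.91), ("bobber", 0.42)]

def pre_annotate(images, min_confidence=0.6):
    """Keep confident proposals automatically; flag the rest for review."""
    auto, needs_review = {}, []
    for img in images:
        confident = [(lbl, c) for lbl, c in old_model_predict(img)
                     if c >= min_confidence]
        if confident:
            auto[img] = confident
        else:
            needs_review.append(img)
    return auto, needs_review

auto, review = pre_annotate(["shot_001.png"])
```

The point of the loop is that each model generation makes labeling the next dataset cheaper, which is why a "good-enough" model arrives faster.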

Please feel free to ask any questions about it.
 

Dildojo

Member
In reply to Xab3r: [yeah, initially that was kinda the idea. I was going to implement event<>action similar to how WeakAuras does it.
This works quite well for things like automatic potions, buff alerts/reactions and any other kind of "reactive" stuff. But this approach gets exponentially harder as the number of conditions grows - that's where scripts come into play. It is very important to "feel" the moment when you're better off switching to scripting capabilities, otherwise you'll just drown in dozens of conditions.
I'll release a new feature called behavior trees (https://discord.com/channels/636487289689866240/668842459425538069/1163235360038473818) in about 2-3 weeks; they should work brilliantly for rotations and any other kind of thing with complex conditional logic.]

This will be very cool to see in action, as I know many people already use a pixel rotation tool
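For readers unfamiliar with the term: a behavior tree composes conditions and actions into nodes that are "ticked" each cycle. A generic sketch in Python (illustration only, not the EyeAuras implementation) shows why this scales better for rotations than a flat list of conditions:

```python
# Minimal behavior-tree sketch (generic illustration, not EyeAuras code).
# A Sequence succeeds only if all children succeed, in order;
# a Selector succeeds on the first child that succeeds.

SUCCESS, FAILURE = "success", "failure"

class Leaf:
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return SUCCESS if self.fn() else FAILURE

class Sequence:
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

# Toy rotation: drink a potion when low on health, otherwise attack.
state = {"hp": 30, "log": []}
drink = Sequence(Leaf(lambda: state["hp"] < 50),
                 Leaf(lambda: state["log"].append("potion") or True))
attack = Leaf(lambda: state["log"].append("attack") or True)
root = Selector(drink, attack)
root.tick()
```

Priorities live in the tree structure instead of in dozens of overlapping conditions, which is exactly what makes complex rotations manageable.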
 

linqse

Member
You know that YouTube has a pause button, right? It's useful for following along step-by-step. 😂 The spacebar is also a shortcut for pausing. 😉 And yes, if you're watching a guide in Russian, you can use subtitles and translate them into your primary language. I don't see any problems with that. Or do you want to read it on an e-book reader? :DDD
 

Xab3r

Member
In reply to Dildojo

there is no black magic - nothing like “I will show it a video of fishing and it will be able to fish”. We are years away from that. What you can do is use that video to train a model to detect a state (e.g. “ready-to-pull”), find some object in a picture (e.g. “the fish is in this spot on the screen”), or even train it to distinguish walkable terrain from non-walkable
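To make the last example concrete: once a segmentation model produces a per-pixel walkability mask, the downstream logic is just grid lookups. A toy sketch (the mask here is hand-made; in reality it would come from model inference):

```python
# Toy use of a walkability mask, as produced by a segmentation model.
# 1 = walkable, 0 = blocked; hand-made here, ML-inferred in practice.
mask = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
]

def is_walkable(mask, x, y):
    """True if pixel (x, y) is inside the mask and marked walkable."""
    return (0 <= y < len(mask)
            and 0 <= x < len(mask[0])
            and mask[y][x] == 1)
```

The model only answers "which pixels are walkable?"; any pathing or movement built on top of that is still ordinary code.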
 

Xab3r

Member
after you get the model, you will still have to put together a bunch of auras or scripts to use it and build the actual fishing logic
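That glue logic is typically a small polling loop around the model's state trigger. A hypothetical sketch (the trigger, cast, and pull hooks are made-up stand-ins, not a real API):

```python
# Sketch of fishing logic driven by an ML state trigger.
# The trigger only answers "is the 'ready-to-pull' state visible now?";
# the cast/pull sequencing around it still has to be written by hand.
# All names are hypothetical stand-ins, not a real API.

def run_fishing(poll_trigger, cast, pull, max_polls=100):
    """Cast, wait until the trigger fires, then pull; returns actions taken."""
    actions = [cast()]
    for _ in range(max_polls):
        if poll_trigger():
            actions.append(pull())
            break
    return actions

# Fake trigger that fires on the third poll.
polls = iter([False, False, True])
log = run_fishing(lambda: next(polls),
                  lambda: "cast", lambda: "pull")
```

The model replaces only the perception step; everything else is ordinary scripting.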
 

Xab3r

Member
Prepared a page: https://wiki.eyeauras.com/en/CVATAAT/why-use-ml
It has some information about potential use cases of ML and a comparison of the different modes.
 

Xab3r

Member
In reply to Dildojo

I'll look into it, but as of now, most of these "I will AI-generate you something!" tools are steaming hot garbage that works well only on pre-prepared scenarios
 

Xab3r

Member
yeah, tried it. It literally just captures screenshots when you click on something and combines them into a slideshow 😄
 

Dildojo

Member
In reply to Xab3r: [I'll look into it, but as of now, most of these "I will AI-generate you something!" tools are steaming hot garbage that works well only on pre-prepared scenarios]

well, this programmer was showcasing it on her YouTube channel and it seemed very simple to use
 

Dildojo

Member
In reply to Xab3r: [yeah, tried it. It literally just captures screenshots when you click on something and combines them into a slideshow 😄]

yes, exactly - sometimes simple is better
 

illone

New member
Hi, I am new here. Is it possible to create a Path of Exile bot with the program, one that can open a map and play the map by itself?
 

Xab3r

Member
In reply to illone: [Hi, I am new here. Is it possible to create a Path of Exile bot with the program, one that can open a map and play the map by itself?]

it's not a simple task and will involve C# coding. Here is how the steps would look:
1) Add a WebUI overlay, which will serve as the main part of your bot. It's basically a program running inside EyeAuras which will draw the minimap, show settings to the user, etc.
2) Train a segmentation model using ML - take 100-200 screenshots of the minimap and mark the areas which are passable; this will be used for movement. Configure an ML Search trigger to process the minimap.
3) Train another model to track enemies. Possibly the best way is to find health bars on the screen.
4) Throw in color searches/image searches/ML searches which will gather the current state of the character - skills, health/mana, etc.
5) In your WebUI overlay you'll be able to access all the configured triggers (minimap, skills, health, etc.) - now you can write the logic in C# which will do the actual movement, clicks, target selection, etc.
Done
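The actual EyeAuras scripts would be C#, but language aside, step 5 boils down to reading a snapshot of trigger states and picking one action per cycle. A Python sketch of that decision step (trigger names and thresholds are made up for illustration):

```python
# Sketch of step 5: combine trigger states into one bot decision.
# Trigger names and thresholds are invented for illustration; in EyeAuras
# this logic would live in a C# script reading the configured triggers.

def decide(state):
    """Pick the next action from the current trigger snapshot."""
    if state["hp"] < 30:
        return "drink_potion"
    if state["enemies"]:
        # Attack the closest detected enemy (e.g. from health-bar detection).
        return "attack", min(state["enemies"], key=lambda e: e["dist"])
    if state["walkable_ahead"]:
        return "move_forward"
    return "turn"

snapshot = {"hp": 80,
            "enemies": [{"id": 1, "dist": 12.0}, {"id": 2, "dist": 4.5}],
            "walkable_ahead": True}
action = decide(snapshot)
```

Keeping the decision function pure (state in, action out) makes the bot logic easy to test without the game running.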

The main benefit of doing it in EyeAuras is that you can potentially skip A LOT of code which you'd have to write otherwise (machine-learning inference, input simulation, anticheat protection, image capture and processing, etc), but it will still require writing C# code and some technical skills
 

Xab3r

Member
Theoretically, after I add behavior trees, the amount of code you'll have to write for such a bot will drop drastically, but we'll see that in practice closer to the end of the year
 

HomHeHum

Member
In reply to Xab3r: [Theoretically, after I add behavior trees, the amount of code you'll have to write for such a bot will drop drastically, but we'll see that in practice closer to the end of the year]

How long until we can have presets generated from AI commands for the tasks at hand? I guess there is already a record-steps option?
 