Keep screen aspect ratio with different resolutions using libGDX

There’s something in that “Screen Resolution” game menu that possesses me. I’ve always fancied making a game with different screen resolutions, but the task is far from trivial. These notes are the result of a weekend spent looking for the solution (with help from the JGO community). You can download the source code from here.


Imagine you are developing a game and start supporting the 480×320 resolution because it fits nicely on your smartphone. You align the menus, place the sprites, and do some nasty hacks (that we have all done at some point) to make your game look pretty. In the end, you have a game that has been developed, literally, for your own phone (or your phone's screen resolution)! It will look distorted on other phones with different screen resolutions 🙁

What you want is to support multiple screen resolutions without hardcoding the layout for every single screen resolution that exists (and there are lots of them).

(My) Solution

The solution I’ve found is not THE solution, but it works well enough for me.

I’m working with libGDX. This library has an OrthographicCamera class (doc, code) that fits 2D games nicely. This class is responsible for 1) defining the volume of the game scene (which in OpenGL jargon is called the frustum) and 2) projecting it orthographically onto a plane: the scene image. In addition, libGDX also provides a wrapper for the OpenGL function glViewport(), which maps the scene image obtained with the camera class onto the device screen.

The plan is the following:

  1. Define a virtual resolution to work with (align menus, place sprites, etc.).
  2. Set the camera to use the virtual resolution.
  3. Use glViewport() to adjust our scene image to the physical resolution of the device screen (keeping the aspect ratio of course).

To define the virtual resolution, it is fine to use static final fields in your ApplicationListener game class (I’m using libGDX jargon). The camera, a Rectangle defining our viewport, and the SpriteBatch, all of which we will use later, are also (non-static) fields of the class.

public class MyAwesomeGame implements ApplicationListener {
    private static final int VIRTUAL_WIDTH = 480;
    private static final int VIRTUAL_HEIGHT = 320;
    private static final float ASPECT_RATIO =
        (float)VIRTUAL_WIDTH/(float)VIRTUAL_HEIGHT;

    private Camera camera;
    private Rectangle viewport;
    private SpriteBatch sb;

When our game starts, it will first execute the method create() and then resize(int, int) with the width and height of the window as input parameters. In create() we should initialize all the fields required further. In particular, we will initialize the camera and the SpriteBatch (canvas of each frame).

    public void create() {
        sb = new SpriteBatch();
        camera = new OrthographicCamera(VIRTUAL_WIDTH, VIRTUAL_HEIGHT);
    }

In resize() we should set up the Rectangle that we will later use to set the viewport. And here is the trick. Let’s walk through this function slowly. First we declare and initialize some local variables.

    public void resize(int width, int height) {
        // calculate new viewport
        float aspectRatio = (float)width/(float)height;
        float scale = 1f;
        Vector2 crop = new Vector2(0f, 0f);

They are quite intuitive: aspectRatio holds the ratio width/height of the device screen (physical resolution), scale is the factor by which to scale our scene image, and crop (not to be confused with crap) is the number of pixels to crop from the viewport in order to keep the aspect ratio of the scene image.

Now, if aspectRatio is greater than the virtual aspect ratio, the physical resolution is (proportionally) wider than the virtual resolution. Therefore, we should match the height of both resolutions (virtual and physical) and crop in the X direction, since our virtual scene image won’t fill the whole screen. Conversely, if aspectRatio is less than ASPECT_RATIO, then we should match the width of both resolutions and crop in the Y direction.

        if(aspectRatio > ASPECT_RATIO) {
            scale = (float)height/(float)VIRTUAL_HEIGHT;
            crop.x = (width - VIRTUAL_WIDTH*scale)/2f;
        } else if(aspectRatio < ASPECT_RATIO) {
            scale = (float)width/(float)VIRTUAL_WIDTH;
            crop.y = (height - VIRTUAL_HEIGHT*scale)/2f;
        }

        float w = (float)VIRTUAL_WIDTH*scale;
        float h = (float)VIRTUAL_HEIGHT*scale;
        viewport = new Rectangle(crop.x, crop.y, w, h);
    }
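Since the letterboxing arithmetic in resize() is plain math, it can be sanity-checked without libGDX. Below is a minimal, self-contained sketch (class and method names are mine, not part of the original code) that reproduces the same computation and returns the viewport as {x, y, width, height}:

```java
// Letterbox math from resize(), extracted so it can be verified without
// libGDX. Returns the viewport as {crop.x, crop.y, width, height}.
public class LetterboxMath {
    static final int VIRTUAL_WIDTH = 480;
    static final int VIRTUAL_HEIGHT = 320;
    static final float ASPECT_RATIO = (float) VIRTUAL_WIDTH / (float) VIRTUAL_HEIGHT;

    static float[] viewport(int width, int height) {
        float aspectRatio = (float) width / (float) height;
        float scale = 1f;
        float cropX = 0f, cropY = 0f;

        if (aspectRatio > ASPECT_RATIO) {        // screen is wider: bars left/right
            scale = (float) height / (float) VIRTUAL_HEIGHT;
            cropX = (width - VIRTUAL_WIDTH * scale) / 2f;
        } else if (aspectRatio < ASPECT_RATIO) { // screen is taller: bars top/bottom
            scale = (float) width / (float) VIRTUAL_WIDTH;
            cropY = (height - VIRTUAL_HEIGHT * scale) / 2f;
        }
        return new float[] { cropX, cropY, VIRTUAL_WIDTH * scale, VIRTUAL_HEIGHT * scale };
    }

    public static void main(String[] args) {
        // An 800x480 screen is wider than 3:2, so the height matches and a
        // 40-pixel bar appears on each side: viewport = 720x480 at x = 40.
        float[] vp = viewport(800, 480);
        System.out.println(vp[0] + " " + vp[1] + " " + vp[2] + " " + vp[3]);
    }
}
```

For an 800×480 screen this yields a 720×480 viewport at x = 40, which is exactly the letterboxing shown in the screenshots later in the post.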

Finally, we just have to modify the render() method (which is used to render our scene, of course) to update the camera, set the viewport, and draw our objects/entities.

    public void render() {
        // update camera
        camera.update();
        camera.apply(Gdx.gl10);

        // set viewport
        Gdx.gl.glViewport((int) viewport.x, (int) viewport.y,
                          (int) viewport.width, (int) viewport.height);

        // clear previous frame
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);

        // draw our objects/entities here...
    }

And that’s it. Let’s see it in action.

Some images

To illustrate these tips I’m rendering a scene that consists of two rectangles: a green one that fills the whole scene (just to know where exactly our scene image is) and a red square, to detect aspect ratio violations visually. We use 480×320 as our virtual resolution, as our smartphone uses it natively. Therefore, on our phone we should see everything without distortion, just as in this screenshot I just took:

Now imagine I send this awesome game to my friend @notch (any similarity with real persons is coincidental), who is really rich and has a smartphone with a greater resolution. He will see this flawed game:

Notice that the square has been distorted into a (non-square) rectangle. My friend is losing part of the feel of my game! And more importantly, the artist making such awesome graphics is really pissed off…

Using the method of this tutorial, he gets the right game:

Ok, it is true. He’s not using his whole smartphone screen (btw, who told him to spend that much money on a fancy new smartphone in the first place?), but at least the aspect ratio is correct and the game's graphics artist is happy again.

Further approximation to perfection

I have discovered nothing new, but at least I won’t wonder again how to perform this tedious but mandatory task. You should know that there are, for sure, better approaches to the resolution problem. For instance, I just came up with the idea of having two or three versions of the game layout with different aspect ratios (say 4:3, 16:9, and 16:10). Then you set the viewport for the layout whose aspect ratio is closest to the physical one, thereby minimizing the ugly black bars.
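That closest-layout selection is easy to prototype. Here is a hedged sketch (the layout set and the names are hypothetical, not from any real project):

```java
// Picks, from a fixed set of authored layouts, the aspect ratio closest
// to the physical screen's, so the letterbox bars are as thin as possible.
public class LayoutPicker {
    static final float[] LAYOUT_RATIOS = { 4f / 3f, 16f / 9f, 16f / 10f };

    static float closestRatio(int width, int height) {
        float target = (float) width / (float) height;
        float best = LAYOUT_RATIOS[0];
        for (float r : LAYOUT_RATIOS) {
            // keep the candidate whose ratio differs least from the screen's
            if (Math.abs(r - target) < Math.abs(best - target)) {
                best = r;
            }
        }
        return best;
    }
}
```

A 1920×1080 screen would pick 16:9 and a 1024×768 screen 4:3; the matching layout then supplies the VIRTUAL_WIDTH/VIRTUAL_HEIGHT constants used above.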

If you have any comment/suggestion/praise/curse, do not hesitate to leave a comment here or say something on Twitter.



73 thoughts on “Keep screen aspect ratio with different resolutions using libGDX”

  1. First of all, thanks for the post. For many projects this solution is more than enough for the multiple screen sizes, same aspect ratio problem.

    I’ve been trying to add this fix to my project but I bumped into the following problem. The apply method receives GL10 or GL11 but it just crashes if you’re using GLES 2.0. Is there any other way of using a virtual screen size and applying the transform to the camera with GLES 2.0?

    Thanks a lot :-D.

    1. Hi David!

      I’m not an expert about OpenGL 2.0 ES, so take care about my answer, please.

      That said, I think the camera.apply() method does not work with GLES 2.0 because the OrthographicCamera class from libGDX doesn’t support it. From the libGDX wiki [1], when they explain the render() method, point #2 says “Obtains the OpenGL instance. It has to be version 1 as version 2 is not supported for OrthographicCamera.” I don’t know much about libGDX (I’ve already moved to another framework), so I don’t know the exact reason why they don’t support it. I’m sure in their forums [2] or in the java-gaming forums [3] you will find the answer to any further questions about it.

      Anyway, that’s regarding why the camera.apply() method is not working on GLES 2.0. As far as I know (remember I know little, so be careful), there’s no problem changing the viewport matrix in GLES 2.0, which is the line of code that does most of the magic. I recommend reading Chapter 3 of the BIG Red Book [4]. It contains a long explanation of OpenGL “scene viewing” theory. Especially, pay attention to the viewport transformation, as it is the one responsible for the scene-to-screen translation.

      I hope this clarifies something. Thanks for reading my blog 🙂


      1. Thanks for the answer and sorry about the double posting :-(.

        In the end I solved my problem by asking in the IRC channel (I had already asked before in the forums and on the gamedev StackExchange site). My problem was that I was setting the camera to orthographic with the real size every time resize was called, so the virtual size was being ignored. In GLES 2.0 you set the transformation through the spritebatch.setTransformation(camera.combined) call.

        Thanks a lot again anyways, your post helped me a lot, now everything is working.

        By the way, what framework are you working with now? Just out of curiosity.

  2. I’m happy to hear you solved the problem!

    Right now I’m at a crossroads. I have learned bits of jMonkey for Java (which is quite promising) and I like it, though it is more an engine than a framework. I’m also thinking of moving to C++ for a bigger personal project (I haven’t decided yet, but I think I’ll use SFML in C++, which I have previous experience with).

    I wish you luck and I hope to hear from your games sometime!

    1. SFML seems quite powerful but have you tried the simpler yet efficient Gosu?

      Well, thanks. I’ve been developing games for a while and have some experience with SDL, Gosu, XNA, Ogre3D and now a little bit of CryEngine. I just wanted to try mobile development.


  3. thanks for the great solution! but i have one more issue.. Is there a simple way to recalculate the getX and getY values to avoid the black areas?? (the x and y coordinates should begin and end on the green area)

    1. I’m not sure I’ve understood you correctly.

      Do you want to avoid the black areas and keep the aspect ratio of the image? You can just use crop = new Vector2(0f, 0f); for all cases, but notice you will lose some information. Part of the image will be outside the device screen, so not rendered.

      Or do you not care about aspect ratio and just want to stretch the image to fill the whole device screen? Then just use viewport = new Rectangle(0, 0, width, height); in the resize(width, height) method. But then I don’t see the point of reading this tutorial…

      I hope this helped 🙂

  4. thanks for your reply.. sorry, I didn’t explain my problem clearly.. I want to grab the x and y coordinates of a mouse click/screen touch, but I want to recalculate these points to avoid the black areas

  5. ok
    camera.unproject() translates my Gdx.input.getX() result into the VIRTUAL_WIDTH range, but I want to cut the black areas out of this projection, which are created when VIRTUAL_WIDTH is fitted to the real screen/window size

  6. What prevents you from doing so? Just check if the input is inside your interaction area. I think some function like this can be used:

    boolean isInsideInteractionArea(float x, float y) {
        return (x >= 0 && x < VIRTUAL_WIDTH && y >= 0 && y < VIRTUAL_HEIGHT);
    }

  7. Thank you very much, but I’m still looking for something else.

    For example, if I want to move a sprite from VIRTUAL_WIDTH = 0 to (for example) VIRTUAL_WIDTH = 700, I have to make a touch slide from Gdx.input.getX() = 0 to (for example) Gdx.input.getX() = 800 … I want the touch slide to go only from 0 to 700 to move my sprite accurately. This is my problem.

  8. But I think that’s the whole purpose of using the unproject() method of the camera!

    When you unproject the 3D vector of the Gdx.input event, (Gdx.input.getX(), Gdx.input.getY(), 0f) for example, it will give you the coordinates of the click (or touch) on the virtual screen coordinate system: (0f, 700f, 0f) or something similar.

  9. The unproject(Gdx.input.getX(), Gdx.input.getY(), 0f) call gives me the coordinates of the virtual screen, but including the black areas.

    For example, when I have VIRTUAL_WIDTH defined as 800 and my real screen width is 900, I get two black areas on the left and right sides of my VIRTUAL SCREEN in order to keep the proper aspect ratio.

    For the unproject(Gdx.input.getX(), Gdx.input.getY(), 0f) method, the coordinates of my real screen go from 0 to 800 (the same as VIRTUAL) but WITH the black areas…

    I’m looking for a solution that limits unproject(Gdx.input.getX(), Gdx.input.getY(), 0f) to my REAL game area WITHOUT the black areas (on the left and right, and also on the top and bottom if they appear there)

    To recalculate my X coordinates to avoid the black areas I made a function like this:

    { touchPos.set((Gdx.input.getX()-(*, , 0);

    but I think it is not the best solution because there are a few ratio possibilities and this IF only covers one of them

  10. if (VIRTUAL_WIDTH <
    touchPos.set((Gdx.input.getX()-(*, , 0);

  11. Have you read this unproject() method [1] in which you can specify your own viewport?

    I think it will return negative values for touches in the left black bar and values greater than VIRTUAL_WIDTH for the right black bar. The whole purpose of unproject() is to go from screen coordinates to OpenGL coordinates.
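    In this setup, unproject() essentially applies an affine map from screen pixels to virtual units. Below is a libGDX-free sketch of that inverse transform (class and method names are mine; in real code you should let camera.unproject() with the viewport do this, and remember that Gdx.input.getY() is measured from the top of the screen):

```java
// Maps a screen-space touch to virtual coordinates given the letterbox
// viewport {x, y, w, h} computed in resize(). Touches inside the black
// bars come out negative or beyond the virtual size, so they are easy
// to reject.
public class TouchMapper {
    static final int VIRTUAL_WIDTH = 480;
    static final int VIRTUAL_HEIGHT = 320;

    static float[] toVirtual(int screenX, int screenY, float[] vp) {
        float scale = vp[2] / VIRTUAL_WIDTH; // vp[2] is the viewport width in pixels
        return new float[] { (screenX - vp[0]) / scale, (screenY - vp[1]) / scale };
    }

    static boolean insideGameArea(float vx, float vy) {
        return vx >= 0 && vx < VIRTUAL_WIDTH && vy >= 0 && vy < VIRTUAL_HEIGHT;
    }
}
```

    With a 600×320 window the viewport is {60, 0, 480, 320}; a touch at screen x = 30 maps to virtual x = -30, which insideGameArea() rejects as a black-bar touch.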


  12. Ey!
    Thanks for the post!
    However I still have a problem.
    I already stretch the screen for my game, but the big problem is that the inputs don’t get resized!!!!,
    so the inputs don’t work.

    Have you found any solution to that problem?

    See ya

    1. I don’t think I completely understand what you mean by “inputs”. I suppose you mean the positions of the screen touches returned by the Gdx.input.get*() methods.

      If that’s the case, you want to read my previous comment. You have to use the unproject() method described in [1] with the viewport you specified. This translates from device coordinates to camera coordinates.

      Please, read my previous conversation with @marcin.


      1. Wow, that was a quick answer.

        I solved it!

        The problem was that my game is for Android. I changed the screen but not the relation with the inputs, so the methods Gdx.input.getX() and getY() didn’t work.

        I solved it with this:

        In the show:

        //I want to stretch to 480×320


        tiledMapHelper.getCamera().setToOrtho(false, screenWidth, screenHeight);

        and for the collisions with rectangles I did

        In the render:

        Vector3 touch = new Vector3(0,0,0);
        tiledMapHelper.getCamera().unproject(touch.set(Gdx.input.getX(), Gdx.input.getY(), 0));

        if(rectangle.contains(touch.x, touch.y)) {
            // insert logic
        }


        Thank you very much!!!!!

        PS: Sorry for my bad English

  13. What if I set the virtual screen resolution to 720×1280 (VIRTUAL_WIDTH × VIRTUAL_HEIGHT) in MyAwesomeGame? Will the method used in the tutorial still work correctly for keeping the aspect ratio?

    1. I think it should work, but I haven’t tried it. Notice that all the calculations are made according to the ASPECT_RATIO value, which in the case of taller-than-wide screens will be smaller than 1. But the rest is agnostic to the ASPECT_RATIO value.

      If you try it, please care to post your comments here for future readers 🙂 Thanks!

  14. I think this code for keeping the aspect ratio of the screen works only in landscape orientation. Is there a solution for portrait orientation?

  15. Again, I think it should work. Notice that the code in the resize() method handles both the > and < cases. Also notice that we fill crop.y instead of crop.x in the second case. Where exactly is it not working for you?

  16. It might be the width and height, I think. Your code seems to work. There is nothing much about screen orientation in the Android manifest unless I add it. I figured out that the only solution is to adjust either the width or the height to the higher value, depending on the screen orientation set via the Android manifest. I realized that crop.x and crop.y from the Rectangle object are used as a position for the x and y coordinates, so I changed them to make it work. I’m using a Google Nexus 7.

  17. I don’t know how to check if your device has changed orientation. This code has been designed without that in mind.

    One possible solution is to link the “orientation changed” event with resize() with width and height swapped.

    A better solution could be to have a boolean flag to note if the device is portrait or landscape and change the behavior of the resize() method accordingly.

  18. I found a different resolution for this problem:

    In the ApplicationListener’s create() method, store the initial game width and height. Apply them to the GL viewport upon resize.

    private float saved_width;
    private float saved_height;
    private OrthographicCamera cam;
    private TextureRenderer t;

    public void create() {
        saved_width = Gdx.graphics.getWidth();
        saved_height = Gdx.graphics.getHeight();
        cam = new OrthographicCamera();
        cam.setToOrtho(false, saved_width, saved_height);
    }

    public void render() {
        // render on ‘t’
    }

    public void resize(int width, int height) {
        Gdx.gl.glViewport(0, 0, (int) saved_width, (int) saved_height);
    }

    Should work just fine with any resolution.

  19. @Carsten: Hmmm, I’m not sure your method will work when you manually resize the game window. You will always have a saved_width×saved_height render rectangle independently of the size of the window, which could overflow or underflow the window.

    Also, this method doesn’t allow you to specify an initial aspect ratio and master resolution (VIRTUAL_HEIGHT, VIRTUAL_WIDTH, and ASPECT_RATIO). It will use the initial screen size for these parameters.

  20. I tried this, but Rectangle and SpriteBatch both “cannot be resolved to a type”. How do I define those / where are they defined?

  21. @soy_yuma
    Thanks, I had included it wrong. I am still starting with Android. Now I get “cannot convert OrthographicCamera to Camera” on the line:
    camera = new OrthographicCamera(VIRTUAL_WIDTH, VIRTUAL_HEIGHT);

    Using this to learn 🙂

  22. That’s pretty strange. According to the latest nightlies of libGDX, OrthographicCamera extends Camera, so there should be no problem assigning an OrthographicCamera to a Camera variable.

    Are you sure you have imported the Camera and OrthographicCamera classes from libGDX? I.e., do you have something like this at the beginning of your java file?

    import com.badlogic.gdx.graphics.Camera;
    import com.badlogic.gdx.graphics.OrthographicCamera;
  23. Hi! I’ve found your code very useful! It works pretty well in the desktop version, but I don’t know why it doesn’t render well on Android. I have uploaded four images for a better description.

    Any idea what it can be? Thanks a lot!!

  24. Hi Archison! I never tested it on Android. It’s pretty difficult to diagnose your problem just from the screenshots you posted on imgur. It also depends on the libGDX version you’re using.

  25. @soy_yuma
    Thanks a lot for answering! After hours of looking and redoing stuff I’ve come to a solution. It’s somehow working. Now I’m dealing with the unproject to solve the problem with the input. I’ll keep coming to your blog! Bye!

  26. hi.

    I have a strange problem if the real device resolution is smaller than VIRTUAL_WIDTH and VIRTUAL_HEIGHT.

    In that situation sprites are drawn at the correct coords, but objects from the ImmediateModeRenderer (pure GL objects like rectangles and circles) are drawn at bad coords (as if without scaling).

    regards marcin

  27. ok, I have the same problem with GL primitives when VIRTUAL_WIDTH and VIRTUAL_HEIGHT are smaller than the real device screen resolution

  28. I have the same problem as marcin (posts 17, 18). Thank you very much, soy_yuma.

    But I got another problem, the same as above, but this time I want to get the X and Y of the STAGE.

    I have screen X, Y and use camera.unproject(vector, viewport) to get camera X, Y.

    But this time I don’t want camera X, Y. I want stage X, Y, so I use stage.screenToStageCoordinates(). And the same problem appears: the X and Y of the stage are not right.

  29. Hey @tr4788! Thanks for asking!

    It’s been a while since I last used libGDX, so take my words with care…

    I’m not sure I’ve understood you correctly. What I understood is that 1) you have a separate camera doing all the dirty tricks of this post, 2) you have an additional stage where you place your libGDX actors, 3) the dirty tricks work with camera.unproject() but don’t with stage.screenToStageCoordinates().

    I haven’t used stages in my life, but if that’s your case, then you probably want to use the camera structure inside the stage. Make sure you call the stage.setCamera() method with your camera and to set the viewport using stage.setViewport(). Then notice that stage.screenToStageCoordinates() is an easy way to do a stage.getCamera().unproject().

    Once you set up your stage with the right viewport/camera, you won’t need a separate camera for all the dirty tricks.

    I hope this helps!

  30. You are so awesome, soy_yuma. You saved me two times 😛

    So, they key words are here: “Then notice that stage.screenToStageCoordinates() is an easy way to do a stage.getCamera().unproject()”.

    Instead of using stage.screenToStageCoordinates(), I just use the above method and pass the viewport into it.

    So now:
    – I can create a game which has a main camera for my dirty work, and I can detect exactly each pixel when the user touches the screen.
    – I can add a stage inside my camera (at run time, and delete it afterwards too) and do something else, and I can detect each pixel in that stage too

    See this:

    And can you help me at this problem:

  31. I’m not sure, @tr4788. I think it’s best to use libGDX as its developers expect you to. Instead of having a (global?) camera to do the dirty work, you should configure the Stage camera with the viewport you like. Then all actors and entities that depend on the Stage will be transformed in a predictable (and correct) way. If you fail to do so, you may end up with actors that are transformed in unexpected ways (like in your question on the libGDX forums).

    The method described in this post was intended to be an *illustration* of the general method; just a minimum working example to demonstrate how it actually works. You should always follow libGDX documentation in order to write quality code.

  32. 🙂

    I coded by their rules. But when I change the glViewport (for letterboxing – black bars), everything gets messed up.

    My question on the forum, I did nothing wrong (I think so)

    1. I just create my world (world camera: 720×1280, config glViewPort to display on various screen )
    2. I add a stage (set stage viewport to my virtual size: 720×1280)
    3. I create actor, add listener for that actor to catch touch event. (actor.addListener…)
    4. Then add an actor to stage.

    If I do not configure the glViewport in step 1, the actor listener works well but my game always stretches to the full screen of any device (without any black bars)

    If I configure the glViewport to show black bars, the actor listener works wrong (it mis-detects locations on itself)

    This is what I think:
    In their code, in the actor.addListener method, they do this: get X and Y on the screen and convert them, then check whether X and Y are inside the actor’s boundary or not. If they are, fire the touch event; if not, do nothing. But they use camera.unproject(vector) instead of camera.unproject(vector, viewport) to convert, so their touch area is wrong.

    So sorry cause of my bad English.

  33. @tr4788, I’ve answered your question in the libGDX forum. I think NateS has already given you the key to solve it. Unfortunately he didn’t specify much, so you probably missed it.

    The thing is that you aren’t configuring the Stage viewport correctly. You’re actually telling the stage to stretch the viewport to the whole screen, even if that means changing the aspect ratio. That probably doesn’t change the rendering code, which may be controlled by glViewport, but it does affect other parts like touch detection.

    Please, read my answer to your forum question. We can keep discussing this in the libGDX forums, where the discussion makes more sense.

  34. It's important to keep in mind that e.g. a ball will look like an ellipse on other screens; that's why e.g. the native Android SDK has special folders in the res folder for each screen size and density.
    Sometimes it's more efficient to put pictures for all possible screen resolutions into separate folders within assets, I think :/

  35. Sep03,2013

    I have this java snippet :

    ### ####
    import com.badlogic.gdx.Gdx;

    public class IMR10 extends GdxTest {
    ImmediateModeRenderer10 renderer;
    int M = 800;
    float bigI = 0;
    SpriteBatch batcher;

    public void create () {
    this.batcher = new SpriteBatch();
    this.renderer = new ImmediateModeRenderer10();

    public void render () {
    if(bigI<M) { drawStuff(); } else {; }

    public boolean needsGL20 () { return false; }

    public void drawStuff() {
    for(int i=0;i<10 && bigI < M;i++,bigI++) {
    renderer.color(1.0f, 0f, 0f, 1);
    renderer.vertex(bigI, 200.0f, 0.0f);
    # End ##

    # #
    initialize(new IMR10(),cfg);

    # End MainAct…. ###
    When I launch this on android emulator I see something like this :

    ……….{emptySpace}……….{emptySpace}……….{emptySpace}……….{emptySpace}………. etc

    And every time I click on the screen the whole line seems to move right.
    The breaks between the drawn pixels ({emptySpace}) are not of the same length as the
    drawn pixels . They’re about 1.5 or 2.0 times their length ( 15 or 20 px in this case).

    My Questions:

    Why are there breaks between the drawn pixels ? How can I prevent this ?

    When the dots are being drawn the whole bunch of dots drawn previously
    appear to be moving as if the entire lots of pixels drawn before are being
    re-drawn and re-drawn at new (x,y) co-ords.

    Can someone help me understand where I am going wrong and how to
    get a predictable line across the screen ?

    I eventually wish to draw an XY plot (GL_LINES) between two vertices where
    x increases by one each time & y can vary from +40 to -40 and the progress of
    the plot can be easily seen as the x-co-ord increases.(Imagine a heartbeat.).
    The above moves too but it is *not supposed to* !

    I would be really grateful for any help in understanding the fundamental concepts
    of openGL involved in what I am doing.

    Tks in Advance .

  36. @bns: The dotted lines you see are not part of the code I’ve shared. I believe it’s something the Android emulator outputs. I’m sorry I can’t be more helpful here. (Also sorry for not approving your comment sooner; I’ve been so busy these weeks…)

  37. Hi, I have a Screen-based game but I am a complete rookie, so I have some questions:
    1. Must I copy the code only in main or in all my
    2. How do the sprites resize? Must I use a function or what? I am looking at your code and I can’t understand almost anything 🙁 some

    Sorry if I bother you.

    Thanks for any advice.

  38. @Constantin Eduard: The code should be part of the class that handles window resizes and that commands everything else to get drawn. In my code that’s MyAwesomeGame (in retrospect, a silly name).

    To resize sprites you need to call the appropriate methods their classes provide. Something like setWidth(), scale(), or similar. It all depends on what you’re using.

  39. It is a nice idea to use a virtual viewport and stretch the screen. However, in your two screenshots there is still black space before and after the green rectangle. Is this the preferred solution? I want to use your technique so that whatever I see on my 480×320 screen looks the same on other, bigger devices.

    1. You either have black bars to match different-ratio resolutions, or you have asymmetrical stretching (and hence your squares become rectangles). It’s a tradeoff. You must choose what you prefer 🙂

      1. My understanding is that if we stretch the viewport, say on a resolution bigger than 480×320, then the objects will definitely look bigger. I want to get this kind of look and feel. Is it possible to achieve this with your code?

        1. Say you have a device with a 480×320 resolution and you like the way your application looks there. If you move to another device with a symmetrically scaled resolution (for example 2x: 960×640), then you will see the same proportions for all graphics. You will keep seeing squares as squares, and no extra black bars.

          If you move to a device with an asymmetric scale (say 480×420), then, with the post’s code, you will still see squares as squares, but with black bars (at the top and bottom in this case).

          Of course, there are other solutions apart from this code (like stretching asymmetrically or trading the black bars for occluding part of the view). It’s all up to you 🙂

  40. This is great! But I have a small problem. The input for my cam is all fucked up now :/ So I can’t click on any buttons if I resize the window. Any ideas?
