• Vinny_93@lemmy.world
      30 days ago

      It’s simple, really. If you don’t understand what the AI is telling you to code, you’ll spend five times what it would take a rawdogger to code it.

      If you write the stuff yourself from scratch you know your train of thought, you know what it means and you know what it needs to be better.

      Show me a head-to-head comparison of several coders doing the same assignment, and let half of them use AI. Then we can assess the effects. My hypothesis is that the fastest one will have used AI. The slowest one won’t have used AI but is simply a crappy coder. But there will probably be non-AI coders quicker than some AI coders.

      • TropicalDingdong@lemmy.world
        30 days ago

        > Show me a head-to-head comparison of several coders doing the same assignment, and let half of them use AI. Then we can assess the effects. My hypothesis is that the fastest one will have used AI. The slowest one won’t have used AI but is simply a crappy coder. But there will probably be non-AI coders quicker than some AI coders.

        You need to introduce one more independent variable: level of coding/programming experience. If you really do already know what you are doing, know how to structure a good query, and know how to confirm whether the code assistant/AI got it right, it absolutely does make you much faster. Also, for really low-level stuff, like defining helper functions, it rarely gets them objectively wrong, although they are often pretty… plain, or naive. So I would extend this hypothesis to cover the range of skill levels in individual coders and programmers. I think this is actually the striking difference: if you already know enough to code these things from scratch, it does make a huge difference.
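        To illustrate the kind of “plain but correct” helper function being described, here is a hypothetical example of the sort an assistant typically produces (not taken from any actual model output):

```python
def chunk_list(items, size):
    """Split a list into consecutive chunks of at most `size` elements."""
    if size < 1:
        raise ValueError("size must be at least 1")
    return [items[i:i + size] for i in range(0, len(items), size)]
```

        Unremarkable, but correct: `chunk_list(list(range(5)), 2)` returns `[[0, 1], [2, 3], [4]]`.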

        Here is an example that I would use (prompt), and the result. Prompt:

        I want a python function to pull imagery for a bounding box in latitude longitude, for a specific year, and a specific band order from this arcgis REST server (https://map.dfg.ca.gov/arcgis/rest/services/Base_Remote_Sensing). I always want a 512 by 512 pixel tile. The layer pattern for the server is NAIP_XXXX_ZZZZ where XXXX is the year, and ZZZZ is either 4Band, CIR, or NDVI, or missing (including the underscore) for RGB. I want the data as a spatially projected geotiff. Have the function query the server first to confirm the layer options before querying.

        Response:

        import requests
        import rasterio
        from rasterio.io import MemoryFile
        import numpy as np

        def get_arcgis_imagery(bbox, year, band_order, output_path):
            """
            Pulls imagery from the ArcGIS REST server for a given bounding box, year, and band order.

            Parameters:
            - bbox: (min_lon, min_lat, max_lon, max_lat)
            - year: YYYY (e.g., 2020)
            - band_order: '4Band', 'CIR', 'NDVI', or '' for RGB
            - output_path: Path to save the output GeoTIFF
            """
            base_url = "https://map.dfg.ca.gov/arcgis/rest/services/Base_Remote_Sensing"
            service_url = f"{base_url}/NAIP_{year}_{band_order}/ImageServer" if band_order else f"{base_url}/NAIP_{year}/ImageServer"

            # Confirming the layer exists
            response = requests.get(f"{service_url}?f=json")
            if response.status_code != 200:
                raise ValueError(f"Layer NAIP_{year}_{band_order} does not exist on the server.")

            # Defining export parameters
            params = {
                'bbox': f"{bbox[0]},{bbox[1]},{bbox[2]},{bbox[3]}",
                'bboxSR': 4326,
                'size': '512,512',
                'imageSR': 3857,  # Projected coordinate system
                'format': 'tiff',
                'f': 'image'
            }

            # Requesting the imagery
            export_url = f"{service_url}/exportImage"
            image_response = requests.get(export_url, params=params)
            if image_response.status_code != 200:
                raise ValueError("Failed to retrieve image data.")

            # Saving the image as a GeoTIFF
            with MemoryFile(image_response.content) as memfile:
                with memfile.open() as dataset:
                    profile = dataset.profile
                    data = dataset.read()

                    with rasterio.open(output_path, 'w', **profile) as dst:
                        dst.write(data)

        This function appears in just a second, and I don’t even need to consider its details. It’s well written, commented, and it just works.
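        For reference, the NAIP_XXXX_ZZZZ layer pattern the prompt describes can be exercised offline. This is a hypothetical standalone helper (not part of the quoted AI output) showing how the pattern maps to service URLs:

```python
BASE = "https://map.dfg.ca.gov/arcgis/rest/services/Base_Remote_Sensing"

def naip_service_url(year, band_order=""):
    """Build the ImageServer URL for the NAIP_XXXX_ZZZZ layer pattern.

    band_order is '4Band', 'CIR', or 'NDVI'; an empty string means RGB,
    in which case the underscore and suffix are omitted entirely.
    """
    layer = f"NAIP_{year}_{band_order}" if band_order else f"NAIP_{year}"
    return f"{BASE}/{layer}/ImageServer"
```

        For example, `naip_service_url(2020, "NDVI")` yields a URL ending in `/NAIP_2020_NDVI/ImageServer`, while `naip_service_url(2020)` drops the suffix for RGB.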

        • LaMouette@jlai.lu
          29 days ago

          But this is something a third grader could do. This is not engineering work at all. It may not even be production-ready code: it should also consider the context it will be inserted into, your strategy for handling errors, etc. And last but not least, I bet this is the content of the first Stack Overflow result when you google the same question. Not worth burning the planet, in my opinion.

  • Rogue@feddit.uk
    30 days ago

    If you’re developing something that has been written a million times before, such as a user authentication API, then yes, you can just let ChatGPT do it for you.

    But as soon as you’re writing something new or niche any LLM is going to mostly spew useless nonsense.

    I’ve been working with Bevy a lot lately, and because it’s new and iterating quickly, there are a lot of breaking changes between versions. AI simply isn’t able to cope with that. It was trained on years-old data and is incapable of adapting to the new way of doing things.